00:00:00.002 Started by upstream project "autotest-per-patch" build number 132538
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.079 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.080 The recommended git tool is: git
00:00:00.080 using credential 00000000-0000-0000-0000-000000000002
00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.117 Fetching changes from the remote Git repository
00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.185 Using shallow fetch with depth 1
00:00:00.185 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.185 > git --version # timeout=10
00:00:00.241 > git --version # 'git version 2.39.2'
00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.292 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.292 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.151 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.168 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.182 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.182 > git config core.sparsecheckout # timeout=10
00:00:05.196 > git read-tree -mu HEAD # timeout=10
00:00:05.214 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.239 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.239 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.327 [Pipeline] Start of Pipeline
00:00:05.343 [Pipeline] library
00:00:05.345 Loading library shm_lib@master
00:00:05.345 Library shm_lib@master is cached. Copying from home.
00:00:05.365 [Pipeline] node
00:00:05.383 Running on CYP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.384 [Pipeline] {
00:00:05.392 [Pipeline] catchError
00:00:05.393 [Pipeline] {
00:00:05.405 [Pipeline] wrap
00:00:05.415 [Pipeline] {
00:00:05.420 [Pipeline] stage
00:00:05.422 [Pipeline] { (Prologue)
00:00:05.738 [Pipeline] sh
00:00:06.026 + logger -p user.info -t JENKINS-CI
00:00:06.045 [Pipeline] echo
00:00:06.047 Node: CYP11
00:00:06.056 [Pipeline] sh
00:00:06.357 [Pipeline] setCustomBuildProperty
00:00:06.372 [Pipeline] echo
00:00:06.374 Cleanup processes
00:00:06.380 [Pipeline] sh
00:00:06.665 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.665 3371769 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.678 [Pipeline] sh
00:00:06.962 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.962 ++ grep -v 'sudo pgrep'
00:00:06.962 ++ awk '{print $1}'
00:00:06.962 + sudo kill -9
00:00:06.962 + true
00:00:06.977 [Pipeline] cleanWs
00:00:06.987 [WS-CLEANUP] Deleting project workspace...
00:00:06.988 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.995 [WS-CLEANUP] done
00:00:07.000 [Pipeline] setCustomBuildProperty
00:00:07.016 [Pipeline] sh
00:00:07.300 + sudo git config --global --replace-all safe.directory '*'
00:00:07.377 [Pipeline] httpRequest
00:00:07.729 [Pipeline] echo
00:00:07.731 Sorcerer 10.211.164.101 is alive
00:00:07.741 [Pipeline] retry
00:00:07.743 [Pipeline] {
00:00:07.753 [Pipeline] httpRequest
00:00:07.757 HttpMethod: GET
00:00:07.758 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.758 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:07.761 Response Code: HTTP/1.1 200 OK
00:00:07.762 Success: Status code 200 is in the accepted range: 200,404
00:00:07.762 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.479 [Pipeline] }
00:00:08.497 [Pipeline] // retry
00:00:08.505 [Pipeline] sh
00:00:08.793 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.811 [Pipeline] httpRequest
00:00:09.408 [Pipeline] echo
00:00:09.410 Sorcerer 10.211.164.101 is alive
00:00:09.419 [Pipeline] retry
00:00:09.421 [Pipeline] {
00:00:09.435 [Pipeline] httpRequest
00:00:09.440 HttpMethod: GET
00:00:09.440 URL: http://10.211.164.101/packages/spdk_c6092c872bfdb453670d4b89fda04e4f5a8c3465.tar.gz
00:00:09.441 Sending request to url: http://10.211.164.101/packages/spdk_c6092c872bfdb453670d4b89fda04e4f5a8c3465.tar.gz
00:00:09.443 Response Code: HTTP/1.1 200 OK
00:00:09.444 Success: Status code 200 is in the accepted range: 200,404
00:00:09.444 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c6092c872bfdb453670d4b89fda04e4f5a8c3465.tar.gz
00:00:25.905 [Pipeline] }
00:00:25.923 [Pipeline] // retry
00:00:25.931 [Pipeline] sh
00:00:26.216 + tar --no-same-owner -xf spdk_c6092c872bfdb453670d4b89fda04e4f5a8c3465.tar.gz
00:00:28.800 [Pipeline] sh
00:00:29.086 + git -C spdk log --oneline -n5
00:00:29.086 c6092c872 bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:29.086 51a65534e bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:29.086 0617ba6b2 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:00:29.086 bb877d8c1 nvmf: Expose DIF type of namespace to host again
00:00:29.086 9f3071c5f nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:00:29.098 [Pipeline] }
00:00:29.113 [Pipeline] // stage
00:00:29.122 [Pipeline] stage
00:00:29.124 [Pipeline] { (Prepare)
00:00:29.140 [Pipeline] writeFile
00:00:29.154 [Pipeline] sh
00:00:29.436 + logger -p user.info -t JENKINS-CI
00:00:29.449 [Pipeline] sh
00:00:29.732 + logger -p user.info -t JENKINS-CI
00:00:29.743 [Pipeline] sh
00:00:30.026 + cat autorun-spdk.conf
00:00:30.026 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.026 SPDK_TEST_NVMF=1
00:00:30.026 SPDK_TEST_NVME_CLI=1
00:00:30.026 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.026 SPDK_TEST_NVMF_NICS=e810
00:00:30.026 SPDK_TEST_VFIOUSER=1
00:00:30.026 SPDK_RUN_UBSAN=1
00:00:30.026 NET_TYPE=phy
00:00:30.033 RUN_NIGHTLY=0
00:00:30.038 [Pipeline] readFile
00:00:30.071 [Pipeline] withEnv
00:00:30.074 [Pipeline] {
00:00:30.090 [Pipeline] sh
00:00:30.374 + set -ex
00:00:30.374 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:30.374 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:30.374 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.374 ++ SPDK_TEST_NVMF=1
00:00:30.374 ++ SPDK_TEST_NVME_CLI=1
00:00:30.374 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:30.374 ++ SPDK_TEST_NVMF_NICS=e810
00:00:30.374 ++ SPDK_TEST_VFIOUSER=1
00:00:30.374 ++ SPDK_RUN_UBSAN=1
00:00:30.374 ++ NET_TYPE=phy
00:00:30.374 ++ RUN_NIGHTLY=0
00:00:30.374 + case $SPDK_TEST_NVMF_NICS in
00:00:30.374 + DRIVERS=ice
00:00:30.374 + [[ tcp == \r\d\m\a ]]
00:00:30.374 + [[ -n ice ]]
00:00:30.374 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:30.374 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:30.374 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:30.374 rmmod: ERROR: Module irdma is not currently loaded
00:00:30.374 rmmod: ERROR: Module i40iw is not currently loaded
00:00:30.374 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:30.374 + true
00:00:30.374 + for D in $DRIVERS
00:00:30.374 + sudo modprobe ice
00:00:30.374 + exit 0
00:00:30.383 [Pipeline] }
00:00:30.400 [Pipeline] // withEnv
00:00:30.405 [Pipeline] }
00:00:30.419 [Pipeline] // stage
00:00:30.430 [Pipeline] catchError
00:00:30.432 [Pipeline] {
00:00:30.447 [Pipeline] timeout
00:00:30.447 Timeout set to expire in 1 hr 0 min
00:00:30.449 [Pipeline] {
00:00:30.465 [Pipeline] stage
00:00:30.467 [Pipeline] { (Tests)
00:00:30.483 [Pipeline] sh
00:00:30.774 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:30.774 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:30.774 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:30.774 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:30.774 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:30.774 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:30.774 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:30.774 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:30.774 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:30.774 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:30.774 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:30.774 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:30.774 + source /etc/os-release
00:00:30.774 ++ NAME='Fedora Linux'
00:00:30.774 ++ VERSION='39 (Cloud Edition)'
00:00:30.774 ++ ID=fedora
00:00:30.774 ++ VERSION_ID=39
00:00:30.774 ++ VERSION_CODENAME=
00:00:30.774 ++ PLATFORM_ID=platform:f39
00:00:30.774 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:30.774 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:30.774 ++ LOGO=fedora-logo-icon
00:00:30.774 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:30.774 ++ HOME_URL=https://fedoraproject.org/
00:00:30.774 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:30.774 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:30.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:30.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:30.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:30.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:30.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:30.775 ++ SUPPORT_END=2024-11-12
00:00:30.775 ++ VARIANT='Cloud Edition'
00:00:30.775 ++ VARIANT_ID=cloud
00:00:30.775 + uname -a
00:00:30.775 Linux spdk-cyp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:30.775 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:33.317 Hugepages
00:00:33.317 node hugesize free / total
00:00:33.317 node0 1048576kB 0 / 0
00:00:33.317 node0 2048kB 0 / 0
00:00:33.317 node1 1048576kB 0 / 0
00:00:33.317 node1 2048kB 0 / 0
00:00:33.317
00:00:33.317 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:33.317 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:00:33.317 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:00:33.317 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:33.317 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:00:33.317 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:00:33.317 + rm -f /tmp/spdk-ld-path
00:00:33.317 + source autorun-spdk.conf
00:00:33.317 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.317 ++ SPDK_TEST_NVMF=1
00:00:33.317 ++ SPDK_TEST_NVME_CLI=1
00:00:33.317 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.317 ++ SPDK_TEST_NVMF_NICS=e810
00:00:33.317 ++ SPDK_TEST_VFIOUSER=1
00:00:33.317 ++ SPDK_RUN_UBSAN=1
00:00:33.317 ++ NET_TYPE=phy
00:00:33.317 ++ RUN_NIGHTLY=0
00:00:33.317 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:33.317 + [[ -n '' ]]
00:00:33.317 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:33.317 + for M in /var/spdk/build-*-manifest.txt
00:00:33.317 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:33.317 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.317 + for M in /var/spdk/build-*-manifest.txt
00:00:33.317 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:33.317 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.317 + for M in /var/spdk/build-*-manifest.txt
00:00:33.317 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:33.317 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:33.317 ++ uname
00:00:33.317 + [[ Linux == \L\i\n\u\x ]]
00:00:33.317 + sudo dmesg -T
00:00:33.317 + sudo dmesg --clear
00:00:33.317 + dmesg_pid=3372877
00:00:33.317 + [[ Fedora Linux == FreeBSD ]]
00:00:33.317 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.317 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:33.317 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:33.317 + [[ -x /usr/src/fio-static/fio ]]
00:00:33.317 + export FIO_BIN=/usr/src/fio-static/fio
00:00:33.317 + FIO_BIN=/usr/src/fio-static/fio
00:00:33.317 + sudo dmesg -Tw
00:00:33.317 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:33.317 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:33.317 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:33.317 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.317 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:33.317 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:33.317 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.317 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:33.317 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.317 19:07:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:33.317 19:07:06 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:33.317 19:07:06 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:33.317 19:07:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:33.317 19:07:06 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:33.317 19:07:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:33.317 19:07:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:33.317 19:07:06 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:33.317 19:07:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:33.317 19:07:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:33.317 19:07:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:33.317 19:07:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.317 19:07:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.317 19:07:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.317 19:07:06 -- paths/export.sh@5 -- $ export PATH
00:00:33.317 19:07:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:33.317 19:07:06 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:33.317 19:07:06 -- common/autobuild_common.sh@493 -- $ date +%s
00:00:33.317 19:07:06 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732644426.XXXXXX
00:00:33.317 19:07:06 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732644426.RVufzd
00:00:33.317 19:07:06 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:00:33.317 19:07:06 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:00:33.317 19:07:06 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:33.317 19:07:06 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:33.317 19:07:06 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:33.317 19:07:06 -- common/autobuild_common.sh@509 -- $ get_config_params
00:00:33.317 19:07:06 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:33.318 19:07:06 -- common/autotest_common.sh@10 -- $ set +x
00:00:33.318 19:07:06 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:33.318 19:07:06 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:00:33.318 19:07:06 -- pm/common@17 -- $ local monitor
00:00:33.318 19:07:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.318 19:07:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.318 19:07:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.318 19:07:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:33.318 19:07:06 -- pm/common@25 -- $ sleep 1
00:00:33.318 19:07:06 -- pm/common@21 -- $ date +%s
00:00:33.318 19:07:06 -- pm/common@21 -- $ date +%s
00:00:33.318 19:07:06 -- pm/common@21 -- $ date +%s
00:00:33.318 19:07:06 -- pm/common@21 -- $ date +%s
00:00:33.318 19:07:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644426
00:00:33.318 19:07:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644426
00:00:33.318 19:07:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644426
00:00:33.318 19:07:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732644426
00:00:33.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644426_collect-vmstat.pm.log
00:00:33.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644426_collect-cpu-load.pm.log
00:00:33.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644426_collect-cpu-temp.pm.log
00:00:33.318 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732644426_collect-bmc-pm.bmc.pm.log
00:00:34.301 19:07:07 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:00:34.301 19:07:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:34.301 19:07:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:34.301 19:07:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:34.301 19:07:07 -- spdk/autobuild.sh@16 -- $ date -u
00:00:34.301 Tue Nov 26 06:07:07 PM UTC 2024
00:00:34.301 19:07:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:34.301 v25.01-pre-274-gc6092c872
00:00:34.301 19:07:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:34.301 19:07:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:34.301 19:07:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:34.301 19:07:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:34.301 19:07:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:34.301 19:07:07 -- common/autotest_common.sh@10 -- $ set +x
00:00:34.301 ************************************
00:00:34.301 START TEST ubsan
00:00:34.301 ************************************
00:00:34.301 19:07:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:00:34.301 using ubsan
00:00:34.301
00:00:34.301 real 0m0.000s
00:00:34.301 user 0m0.000s
00:00:34.301 sys 0m0.000s
00:00:34.302 19:07:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:34.302 19:07:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:34.302 ************************************
00:00:34.302 END TEST ubsan
00:00:34.302 ************************************
00:00:34.302 19:07:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:34.302 19:07:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:34.302 19:07:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:34.302 19:07:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:34.302 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:34.302 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:34.584 Using 'verbs' RDMA provider
00:00:45.148 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:55.143 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:55.144 Creating mk/config.mk...done.
00:00:55.144 Creating mk/cc.flags.mk...done.
00:00:55.144 Type 'make' to build.
00:00:55.144 19:07:28 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:00:55.144 19:07:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:55.144 19:07:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:55.144 19:07:28 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.144 ************************************
00:00:55.144 START TEST make
00:00:55.144 ************************************
00:00:55.144 19:07:28 make -- common/autotest_common.sh@1129 -- $ make -j144
00:00:55.144 make[1]: Nothing to be done for 'all'.
00:00:56.085 The Meson build system
00:00:56.085 Version: 1.5.0
00:00:56.086 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:56.086 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:56.086 Build type: native build
00:00:56.086 Project name: libvfio-user
00:00:56.086 Project version: 0.0.1
00:00:56.086 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:00:56.086 C linker for the host machine: cc ld.bfd 2.40-14
00:00:56.086 Host machine cpu family: x86_64
00:00:56.086 Host machine cpu: x86_64
00:00:56.086 Run-time dependency threads found: YES
00:00:56.086 Library dl found: YES
00:00:56.086 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:00:56.086 Run-time dependency json-c found: YES 0.17
00:00:56.086 Run-time dependency cmocka found: YES 1.1.7
00:00:56.086 Program pytest-3 found: NO
00:00:56.086 Program flake8 found: NO
00:00:56.086 Program misspell-fixer found: NO
00:00:56.086 Program restructuredtext-lint found: NO
00:00:56.086 Program valgrind found: YES (/usr/bin/valgrind)
00:00:56.086 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:56.086 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:56.086 Compiler for C supports arguments -Wwrite-strings: YES
00:00:56.086 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:56.086 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:56.086 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:56.086 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:56.086 Build targets in project: 8
00:00:56.086 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:56.086 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:56.086
00:00:56.086 libvfio-user 0.0.1
00:00:56.086
00:00:56.086 User defined options
00:00:56.086 buildtype : debug
00:00:56.086 default_library: shared
00:00:56.086 libdir : /usr/local/lib
00:00:56.086
00:00:56.086 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:56.345 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:56.345 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:56.345 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:56.345 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:56.345 [4/37] Compiling C object samples/null.p/null.c.o
00:00:56.345 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:56.345 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:56.345 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:56.345 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:56.345 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:56.345 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:56.345 [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:56.345 [12/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:56.345 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:56.345 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:56.345 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:56.345 [16/37] Compiling C object samples/server.p/server.c.o
00:00:56.345 [17/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:56.345 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:56.345 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:56.345 [20/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:56.345 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:56.345 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:56.345 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:56.345 [24/37] Compiling C object samples/client.p/client.c.o
00:00:56.345 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:56.346 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:56.346 [27/37] Linking target samples/client
00:00:56.346 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:56.346 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:56.346 [30/37] Linking target test/unit_tests
00:00:56.346 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:00:56.604 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:56.604 [33/37] Linking target samples/server
00:00:56.604 [34/37] Linking target samples/lspci
00:00:56.604 [35/37] Linking target samples/shadow_ioeventfd_server
00:00:56.604 [36/37] Linking target samples/gpio-pci-idio-16
00:00:56.604 [37/37] Linking target samples/null
00:00:56.604 INFO: autodetecting backend as ninja
00:00:56.604 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:56.604 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:56.865 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:56.865 ninja: no work to do.
00:01:00.157 The Meson build system
00:01:00.157 Version: 1.5.0
00:01:00.157 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:00.157 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:00.157 Build type: native build
00:01:00.157 Program cat found: YES (/usr/bin/cat)
00:01:00.157 Project name: DPDK
00:01:00.157 Project version: 24.03.0
00:01:00.157 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:00.157 C linker for the host machine: cc ld.bfd 2.40-14
00:01:00.157 Host machine cpu family: x86_64
00:01:00.157 Host machine cpu: x86_64
00:01:00.157 Message: ## Building in Developer Mode ##
00:01:00.157 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:00.157 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:00.157 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:00.157 Program python3 found: YES (/usr/bin/python3)
00:01:00.157 Program cat found: YES (/usr/bin/cat)
00:01:00.157 Compiler for C supports arguments -march=native: YES
00:01:00.157 Checking for size of "void *" : 8
00:01:00.157 Checking for size of "void *" : 8 (cached)
00:01:00.157 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:00.157 Library m found: YES
00:01:00.157 Library numa found: YES
00:01:00.157 Has header "numaif.h" : YES
00:01:00.157 Library fdt found: NO
00:01:00.157 Library execinfo found: NO
00:01:00.157 Has header "execinfo.h" : YES
00:01:00.157 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:00.157 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:00.157 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:00.157 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:00.157 Run-time dependency openssl found: YES 3.1.1
00:01:00.157 Run-time dependency libpcap found: YES 1.10.4
00:01:00.157 Has header "pcap.h" with dependency libpcap: YES
00:01:00.157 Compiler for C supports arguments -Wcast-qual: YES
00:01:00.157 Compiler for C supports arguments -Wdeprecated: YES
00:01:00.157 Compiler for C supports arguments -Wformat: YES
00:01:00.157 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:00.157 Compiler for C supports arguments -Wformat-security: NO
00:01:00.157 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:00.157 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:00.157 Compiler for C supports arguments -Wnested-externs: YES
00:01:00.157 Compiler for C supports arguments -Wold-style-definition: YES
00:01:00.157 Compiler for C supports arguments -Wpointer-arith: YES
00:01:00.157 Compiler for C supports arguments -Wsign-compare: YES
00:01:00.157 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:00.157 Compiler for C supports arguments -Wundef: YES
00:01:00.157 Compiler for C supports arguments -Wwrite-strings: YES
00:01:00.157 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:00.157 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:00.157 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:00.157 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:00.157 Program objdump found: YES (/usr/bin/objdump)
00:01:00.157 Compiler for C supports arguments -mavx512f: YES
00:01:00.157 Checking if "AVX512 checking" compiles: YES
00:01:00.157 Fetching value of define "__SSE4_2__" : 1
00:01:00.157 Fetching value of define "__AES__" : 1
00:01:00.157 Fetching value of define "__AVX__" : 1
00:01:00.157 Fetching value of define "__AVX2__" : 1
00:01:00.157 Fetching value of define "__AVX512BW__" : 1
00:01:00.157 Fetching value of define "__AVX512CD__" : 1
00:01:00.157 Fetching value of define "__AVX512DQ__" : 1
00:01:00.157 Fetching value of define "__AVX512F__" : 1
00:01:00.157 Fetching value of define "__AVX512VL__" : 1 00:01:00.157 Fetching value of define "__PCLMUL__" : 1 00:01:00.157 Fetching value of define "__RDRND__" : 1 00:01:00.158 Fetching value of define "__RDSEED__" : 1 00:01:00.158 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:00.158 Fetching value of define "__znver1__" : (undefined) 00:01:00.158 Fetching value of define "__znver2__" : (undefined) 00:01:00.158 Fetching value of define "__znver3__" : (undefined) 00:01:00.158 Fetching value of define "__znver4__" : (undefined) 00:01:00.158 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:00.158 Message: lib/log: Defining dependency "log" 00:01:00.158 Message: lib/kvargs: Defining dependency "kvargs" 00:01:00.158 Message: lib/telemetry: Defining dependency "telemetry" 00:01:00.158 Checking for function "getentropy" : NO 00:01:00.158 Message: lib/eal: Defining dependency "eal" 00:01:00.158 Message: lib/ring: Defining dependency "ring" 00:01:00.158 Message: lib/rcu: Defining dependency "rcu" 00:01:00.158 Message: lib/mempool: Defining dependency "mempool" 00:01:00.158 Message: lib/mbuf: Defining dependency "mbuf" 00:01:00.158 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:00.158 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:00.158 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:00.158 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:00.158 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:00.158 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:00.158 Compiler for C supports arguments -mpclmul: YES 00:01:00.158 Compiler for C supports arguments -maes: YES 00:01:00.158 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:00.158 Compiler for C supports arguments -mavx512bw: YES 00:01:00.158 Compiler for C supports arguments -mavx512dq: YES 00:01:00.158 Compiler for C supports arguments -mavx512vl: YES 00:01:00.158 Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:00.158 Compiler for C supports arguments -mavx2: YES 00:01:00.158 Compiler for C supports arguments -mavx: YES 00:01:00.158 Message: lib/net: Defining dependency "net" 00:01:00.158 Message: lib/meter: Defining dependency "meter" 00:01:00.158 Message: lib/ethdev: Defining dependency "ethdev" 00:01:00.158 Message: lib/pci: Defining dependency "pci" 00:01:00.158 Message: lib/cmdline: Defining dependency "cmdline" 00:01:00.158 Message: lib/hash: Defining dependency "hash" 00:01:00.158 Message: lib/timer: Defining dependency "timer" 00:01:00.158 Message: lib/compressdev: Defining dependency "compressdev" 00:01:00.158 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:00.158 Message: lib/dmadev: Defining dependency "dmadev" 00:01:00.158 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:00.158 Message: lib/power: Defining dependency "power" 00:01:00.158 Message: lib/reorder: Defining dependency "reorder" 00:01:00.158 Message: lib/security: Defining dependency "security" 00:01:00.158 Has header "linux/userfaultfd.h" : YES 00:01:00.158 Has header "linux/vduse.h" : YES 00:01:00.158 Message: lib/vhost: Defining dependency "vhost" 00:01:00.158 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:00.158 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:00.158 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:00.158 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:00.158 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:00.158 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:00.158 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:00.158 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:00.158 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:00.158 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:00.158 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:00.158 Configuring doxy-api-html.conf using configuration 00:01:00.158 Configuring doxy-api-man.conf using configuration 00:01:00.158 Program mandb found: YES (/usr/bin/mandb) 00:01:00.158 Program sphinx-build found: NO 00:01:00.158 Configuring rte_build_config.h using configuration 00:01:00.158 Message: 00:01:00.158 ================= 00:01:00.158 Applications Enabled 00:01:00.158 ================= 00:01:00.158 00:01:00.158 apps: 00:01:00.158 00:01:00.158 00:01:00.158 Message: 00:01:00.158 ================= 00:01:00.158 Libraries Enabled 00:01:00.158 ================= 00:01:00.158 00:01:00.158 libs: 00:01:00.158 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:00.158 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:00.158 cryptodev, dmadev, power, reorder, security, vhost, 00:01:00.158 00:01:00.158 Message: 00:01:00.158 =============== 00:01:00.158 Drivers Enabled 00:01:00.158 =============== 00:01:00.158 00:01:00.158 common: 00:01:00.158 00:01:00.158 bus: 00:01:00.158 pci, vdev, 00:01:00.158 mempool: 00:01:00.158 ring, 00:01:00.158 dma: 00:01:00.158 00:01:00.158 net: 00:01:00.158 00:01:00.158 crypto: 00:01:00.158 00:01:00.158 compress: 00:01:00.158 00:01:00.158 vdpa: 00:01:00.158 00:01:00.158 00:01:00.158 Message: 00:01:00.158 ================= 00:01:00.158 Content Skipped 00:01:00.158 ================= 00:01:00.158 00:01:00.158 apps: 00:01:00.158 dumpcap: explicitly disabled via build config 00:01:00.158 graph: explicitly disabled via build config 00:01:00.158 pdump: explicitly disabled via build config 00:01:00.158 proc-info: explicitly disabled via build config 00:01:00.158 test-acl: explicitly disabled via build config 00:01:00.158 test-bbdev: explicitly disabled via build config 00:01:00.158 test-cmdline: explicitly disabled via build config 00:01:00.158 test-compress-perf: explicitly disabled via build config 00:01:00.158 test-crypto-perf: explicitly disabled via build 
config 00:01:00.158 test-dma-perf: explicitly disabled via build config 00:01:00.158 test-eventdev: explicitly disabled via build config 00:01:00.158 test-fib: explicitly disabled via build config 00:01:00.158 test-flow-perf: explicitly disabled via build config 00:01:00.158 test-gpudev: explicitly disabled via build config 00:01:00.158 test-mldev: explicitly disabled via build config 00:01:00.158 test-pipeline: explicitly disabled via build config 00:01:00.158 test-pmd: explicitly disabled via build config 00:01:00.158 test-regex: explicitly disabled via build config 00:01:00.158 test-sad: explicitly disabled via build config 00:01:00.158 test-security-perf: explicitly disabled via build config 00:01:00.158 00:01:00.158 libs: 00:01:00.158 argparse: explicitly disabled via build config 00:01:00.158 metrics: explicitly disabled via build config 00:01:00.158 acl: explicitly disabled via build config 00:01:00.158 bbdev: explicitly disabled via build config 00:01:00.158 bitratestats: explicitly disabled via build config 00:01:00.158 bpf: explicitly disabled via build config 00:01:00.158 cfgfile: explicitly disabled via build config 00:01:00.158 distributor: explicitly disabled via build config 00:01:00.158 efd: explicitly disabled via build config 00:01:00.158 eventdev: explicitly disabled via build config 00:01:00.158 dispatcher: explicitly disabled via build config 00:01:00.158 gpudev: explicitly disabled via build config 00:01:00.158 gro: explicitly disabled via build config 00:01:00.158 gso: explicitly disabled via build config 00:01:00.158 ip_frag: explicitly disabled via build config 00:01:00.158 jobstats: explicitly disabled via build config 00:01:00.158 latencystats: explicitly disabled via build config 00:01:00.158 lpm: explicitly disabled via build config 00:01:00.158 member: explicitly disabled via build config 00:01:00.158 pcapng: explicitly disabled via build config 00:01:00.158 rawdev: explicitly disabled via build config 00:01:00.158 regexdev: explicitly 
disabled via build config 00:01:00.158 mldev: explicitly disabled via build config 00:01:00.158 rib: explicitly disabled via build config 00:01:00.158 sched: explicitly disabled via build config 00:01:00.158 stack: explicitly disabled via build config 00:01:00.158 ipsec: explicitly disabled via build config 00:01:00.158 pdcp: explicitly disabled via build config 00:01:00.158 fib: explicitly disabled via build config 00:01:00.158 port: explicitly disabled via build config 00:01:00.158 pdump: explicitly disabled via build config 00:01:00.158 table: explicitly disabled via build config 00:01:00.158 pipeline: explicitly disabled via build config 00:01:00.158 graph: explicitly disabled via build config 00:01:00.158 node: explicitly disabled via build config 00:01:00.158 00:01:00.158 drivers: 00:01:00.158 common/cpt: not in enabled drivers build config 00:01:00.158 common/dpaax: not in enabled drivers build config 00:01:00.158 common/iavf: not in enabled drivers build config 00:01:00.158 common/idpf: not in enabled drivers build config 00:01:00.158 common/ionic: not in enabled drivers build config 00:01:00.158 common/mvep: not in enabled drivers build config 00:01:00.158 common/octeontx: not in enabled drivers build config 00:01:00.158 bus/auxiliary: not in enabled drivers build config 00:01:00.158 bus/cdx: not in enabled drivers build config 00:01:00.158 bus/dpaa: not in enabled drivers build config 00:01:00.158 bus/fslmc: not in enabled drivers build config 00:01:00.158 bus/ifpga: not in enabled drivers build config 00:01:00.158 bus/platform: not in enabled drivers build config 00:01:00.158 bus/uacce: not in enabled drivers build config 00:01:00.158 bus/vmbus: not in enabled drivers build config 00:01:00.158 common/cnxk: not in enabled drivers build config 00:01:00.158 common/mlx5: not in enabled drivers build config 00:01:00.158 common/nfp: not in enabled drivers build config 00:01:00.158 common/nitrox: not in enabled drivers build config 00:01:00.158 common/qat: not 
in enabled drivers build config 00:01:00.158 common/sfc_efx: not in enabled drivers build config 00:01:00.158 mempool/bucket: not in enabled drivers build config 00:01:00.158 mempool/cnxk: not in enabled drivers build config 00:01:00.158 mempool/dpaa: not in enabled drivers build config 00:01:00.158 mempool/dpaa2: not in enabled drivers build config 00:01:00.158 mempool/octeontx: not in enabled drivers build config 00:01:00.158 mempool/stack: not in enabled drivers build config 00:01:00.158 dma/cnxk: not in enabled drivers build config 00:01:00.158 dma/dpaa: not in enabled drivers build config 00:01:00.158 dma/dpaa2: not in enabled drivers build config 00:01:00.158 dma/hisilicon: not in enabled drivers build config 00:01:00.159 dma/idxd: not in enabled drivers build config 00:01:00.159 dma/ioat: not in enabled drivers build config 00:01:00.159 dma/skeleton: not in enabled drivers build config 00:01:00.159 net/af_packet: not in enabled drivers build config 00:01:00.159 net/af_xdp: not in enabled drivers build config 00:01:00.159 net/ark: not in enabled drivers build config 00:01:00.159 net/atlantic: not in enabled drivers build config 00:01:00.159 net/avp: not in enabled drivers build config 00:01:00.159 net/axgbe: not in enabled drivers build config 00:01:00.159 net/bnx2x: not in enabled drivers build config 00:01:00.159 net/bnxt: not in enabled drivers build config 00:01:00.159 net/bonding: not in enabled drivers build config 00:01:00.159 net/cnxk: not in enabled drivers build config 00:01:00.159 net/cpfl: not in enabled drivers build config 00:01:00.159 net/cxgbe: not in enabled drivers build config 00:01:00.159 net/dpaa: not in enabled drivers build config 00:01:00.159 net/dpaa2: not in enabled drivers build config 00:01:00.159 net/e1000: not in enabled drivers build config 00:01:00.159 net/ena: not in enabled drivers build config 00:01:00.159 net/enetc: not in enabled drivers build config 00:01:00.159 net/enetfec: not in enabled drivers build config 
00:01:00.159 net/enic: not in enabled drivers build config 00:01:00.159 net/failsafe: not in enabled drivers build config 00:01:00.159 net/fm10k: not in enabled drivers build config 00:01:00.159 net/gve: not in enabled drivers build config 00:01:00.159 net/hinic: not in enabled drivers build config 00:01:00.159 net/hns3: not in enabled drivers build config 00:01:00.159 net/i40e: not in enabled drivers build config 00:01:00.159 net/iavf: not in enabled drivers build config 00:01:00.159 net/ice: not in enabled drivers build config 00:01:00.159 net/idpf: not in enabled drivers build config 00:01:00.159 net/igc: not in enabled drivers build config 00:01:00.159 net/ionic: not in enabled drivers build config 00:01:00.159 net/ipn3ke: not in enabled drivers build config 00:01:00.159 net/ixgbe: not in enabled drivers build config 00:01:00.159 net/mana: not in enabled drivers build config 00:01:00.159 net/memif: not in enabled drivers build config 00:01:00.159 net/mlx4: not in enabled drivers build config 00:01:00.159 net/mlx5: not in enabled drivers build config 00:01:00.159 net/mvneta: not in enabled drivers build config 00:01:00.159 net/mvpp2: not in enabled drivers build config 00:01:00.159 net/netvsc: not in enabled drivers build config 00:01:00.159 net/nfb: not in enabled drivers build config 00:01:00.159 net/nfp: not in enabled drivers build config 00:01:00.159 net/ngbe: not in enabled drivers build config 00:01:00.159 net/null: not in enabled drivers build config 00:01:00.159 net/octeontx: not in enabled drivers build config 00:01:00.159 net/octeon_ep: not in enabled drivers build config 00:01:00.159 net/pcap: not in enabled drivers build config 00:01:00.159 net/pfe: not in enabled drivers build config 00:01:00.159 net/qede: not in enabled drivers build config 00:01:00.159 net/ring: not in enabled drivers build config 00:01:00.159 net/sfc: not in enabled drivers build config 00:01:00.159 net/softnic: not in enabled drivers build config 00:01:00.159 net/tap: not in 
enabled drivers build config 00:01:00.159 net/thunderx: not in enabled drivers build config 00:01:00.159 net/txgbe: not in enabled drivers build config 00:01:00.159 net/vdev_netvsc: not in enabled drivers build config 00:01:00.159 net/vhost: not in enabled drivers build config 00:01:00.159 net/virtio: not in enabled drivers build config 00:01:00.159 net/vmxnet3: not in enabled drivers build config 00:01:00.159 raw/*: missing internal dependency, "rawdev" 00:01:00.159 crypto/armv8: not in enabled drivers build config 00:01:00.159 crypto/bcmfs: not in enabled drivers build config 00:01:00.159 crypto/caam_jr: not in enabled drivers build config 00:01:00.159 crypto/ccp: not in enabled drivers build config 00:01:00.159 crypto/cnxk: not in enabled drivers build config 00:01:00.159 crypto/dpaa_sec: not in enabled drivers build config 00:01:00.159 crypto/dpaa2_sec: not in enabled drivers build config 00:01:00.159 crypto/ipsec_mb: not in enabled drivers build config 00:01:00.159 crypto/mlx5: not in enabled drivers build config 00:01:00.159 crypto/mvsam: not in enabled drivers build config 00:01:00.159 crypto/nitrox: not in enabled drivers build config 00:01:00.159 crypto/null: not in enabled drivers build config 00:01:00.159 crypto/octeontx: not in enabled drivers build config 00:01:00.159 crypto/openssl: not in enabled drivers build config 00:01:00.159 crypto/scheduler: not in enabled drivers build config 00:01:00.159 crypto/uadk: not in enabled drivers build config 00:01:00.159 crypto/virtio: not in enabled drivers build config 00:01:00.159 compress/isal: not in enabled drivers build config 00:01:00.159 compress/mlx5: not in enabled drivers build config 00:01:00.159 compress/nitrox: not in enabled drivers build config 00:01:00.159 compress/octeontx: not in enabled drivers build config 00:01:00.159 compress/zlib: not in enabled drivers build config 00:01:00.159 regex/*: missing internal dependency, "regexdev" 00:01:00.159 ml/*: missing internal dependency, "mldev" 
00:01:00.159 vdpa/ifc: not in enabled drivers build config 00:01:00.159 vdpa/mlx5: not in enabled drivers build config 00:01:00.159 vdpa/nfp: not in enabled drivers build config 00:01:00.159 vdpa/sfc: not in enabled drivers build config 00:01:00.159 event/*: missing internal dependency, "eventdev" 00:01:00.159 baseband/*: missing internal dependency, "bbdev" 00:01:00.159 gpu/*: missing internal dependency, "gpudev" 00:01:00.159 00:01:00.159 00:01:00.419 Build targets in project: 84 00:01:00.419 00:01:00.419 DPDK 24.03.0 00:01:00.419 00:01:00.419 User defined options 00:01:00.419 buildtype : debug 00:01:00.419 default_library : shared 00:01:00.419 libdir : lib 00:01:00.419 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:00.419 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:00.419 c_link_args : 00:01:00.419 cpu_instruction_set: native 00:01:00.419 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:00.419 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:00.419 enable_docs : false 00:01:00.419 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:00.419 enable_kmods : false 00:01:00.419 max_lcores : 128 00:01:00.419 tests : false 00:01:00.419 00:01:00.419 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:00.690 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:00.690 [1/267] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:00.690 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:00.690 [3/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:00.690 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:00.690 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:00.690 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:00.690 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:00.690 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:00.690 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:00.690 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:00.690 [11/267] Linking static target lib/librte_kvargs.a 00:01:00.690 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:00.955 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:00.955 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:00.955 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:00.955 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:00.955 [17/267] Linking static target lib/librte_log.a 00:01:00.955 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:00.955 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:00.955 [20/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:00.955 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:00.955 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:00.955 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:00.955 [24/267] Linking static target lib/librte_pci.a 00:01:00.955 
[25/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:00.955 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:00.955 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:00.955 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:00.955 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:00.955 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:00.955 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:00.955 [32/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:00.955 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:00.955 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:00.955 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:00.955 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:01.214 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:01.214 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:01.214 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:01.214 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:01.214 [41/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:01.214 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:01.214 [43/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:01.214 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:01.214 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:01.214 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:01.214 
[47/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:01.214 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:01.214 [49/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:01.214 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:01.214 [51/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.214 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:01.214 [53/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.214 [54/267] Linking static target lib/librte_telemetry.a 00:01:01.214 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:01.214 [56/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:01.214 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:01.214 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:01.214 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:01.214 [60/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:01.214 [61/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:01.214 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:01.214 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:01.214 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:01.214 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:01.214 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:01.214 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:01.214 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:01.214 [69/267] 
Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:01.214 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:01.214 [71/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:01.214 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:01.214 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:01.214 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:01.214 [75/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:01.214 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:01.214 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:01.214 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:01.214 [79/267] Linking static target lib/librte_meter.a 00:01:01.214 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:01.214 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:01.215 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:01.215 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:01.215 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:01.215 [85/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:01.215 [86/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:01.215 [87/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:01.215 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:01.215 [89/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:01.215 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:01.215 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:01.215 
[92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:01.215 [93/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:01.215 [94/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:01.215 [95/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:01.215 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:01.215 [97/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:01.215 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:01.215 [99/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:01.215 [100/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:01.474 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:01.474 [102/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:01.474 [103/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:01.474 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:01.474 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:01.474 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:01.474 [107/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:01.474 [108/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:01.474 [109/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:01.474 [110/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:01.474 [111/267] Linking static target lib/librte_dmadev.a 00:01:01.474 [112/267] Linking static target lib/librte_ring.a 00:01:01.474 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:01.474 [114/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 
00:01:01.474 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:01.474 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:01.474 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:01.474 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:01.474 [119/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:01.474 [120/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:01.474 [121/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:01.474 [122/267] Linking static target lib/librte_rcu.a 00:01:01.474 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:01.474 [124/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:01.474 [125/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:01.474 [126/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:01.474 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:01.474 [128/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:01.474 [129/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:01.474 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:01.474 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:01.474 [132/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.474 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:01.474 [134/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:01.474 [135/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:01.474 [136/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:01.474 
[137/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:01.474 [138/267] Linking static target lib/librte_timer.a 00:01:01.474 [139/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:01.474 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:01.474 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:01.474 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:01.474 [143/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:01.474 [144/267] Linking static target lib/librte_compressdev.a 00:01:01.474 [145/267] Linking static target lib/librte_cmdline.a 00:01:01.474 [146/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:01.474 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:01.474 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:01.474 [149/267] Linking static target lib/librte_net.a 00:01:01.474 [150/267] Linking target lib/librte_log.so.24.1 00:01:01.474 [151/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:01.474 [152/267] Linking static target lib/librte_power.a 00:01:01.474 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:01.474 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:01.474 [155/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:01.474 [156/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:01.474 [157/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:01.474 [158/267] Linking static target lib/librte_reorder.a 00:01:01.474 [159/267] Linking static target lib/librte_security.a 00:01:01.474 [160/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:01.474 [161/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:01.474 [162/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:01.474 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:01.474 [164/267] Linking static target lib/librte_mempool.a 00:01:01.474 [165/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:01.474 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:01.474 [167/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:01.474 [168/267] Linking static target lib/librte_eal.a 00:01:01.474 [169/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:01.474 [170/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:01.474 [171/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.474 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:01.474 [173/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:01.474 [174/267] Linking static target lib/librte_mbuf.a 00:01:01.474 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:01.474 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:01.474 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:01.474 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:01.474 [179/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:01.474 [180/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:01.475 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:01.475 [182/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:01.475 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:01.475 [184/267] Linking target lib/librte_kvargs.so.24.1 
00:01:01.475 [185/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.475 [186/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.475 [187/267] Linking static target drivers/librte_bus_vdev.a 00:01:01.475 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:01.475 [189/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.475 [190/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.475 [191/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.475 [192/267] Linking target lib/librte_telemetry.so.24.1 00:01:01.735 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:01.735 [194/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:01.735 [196/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:01.735 [197/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:01.735 [198/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [199/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.735 [201/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:01.735 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.735 [203/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:01.735 [204/267] Linking static target drivers/librte_mempool_ring.a 00:01:01.735 [205/267] Compiling C 
object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.735 [206/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.735 [207/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:01.735 [208/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:01.735 [209/267] Linking static target drivers/librte_bus_pci.a 00:01:01.735 [210/267] Linking static target lib/librte_hash.a 00:01:01.735 [211/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:01.735 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:01.735 [213/267] Linking static target lib/librte_cryptodev.a 00:01:01.735 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [215/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.735 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [218/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [220/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [221/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [222/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.995 [223/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:01.995 [224/267] Linking static target lib/librte_ethdev.a 00:01:01.995 [225/267] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:02.254 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.824 [227/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.824 [228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:02.824 [229/267] Linking static target lib/librte_vhost.a 00:01:04.203 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.745 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:06.745 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.005 [233/267] Linking target lib/librte_eal.so.24.1 00:01:07.005 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:07.005 [235/267] Linking target lib/librte_ring.so.24.1 00:01:07.005 [236/267] Linking target lib/librte_meter.so.24.1 00:01:07.005 [237/267] Linking target lib/librte_timer.so.24.1 00:01:07.005 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:01:07.005 [239/267] Linking target lib/librte_dmadev.so.24.1 00:01:07.005 [240/267] Linking target lib/librte_pci.so.24.1 00:01:07.005 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:07.005 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:07.005 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:07.005 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:07.005 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:07.005 [246/267] Linking target lib/librte_rcu.so.24.1 00:01:07.005 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:01:07.005 [248/267] Linking target lib/librte_mempool.so.24.1 
00:01:07.263 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:07.263 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:07.263 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:01:07.263 [252/267] Linking target lib/librte_mbuf.so.24.1 00:01:07.263 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:07.263 [254/267] Linking target lib/librte_net.so.24.1 00:01:07.263 [255/267] Linking target lib/librte_compressdev.so.24.1 00:01:07.263 [256/267] Linking target lib/librte_reorder.so.24.1 00:01:07.263 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:01:07.523 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:07.523 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:07.523 [260/267] Linking target lib/librte_security.so.24.1 00:01:07.523 [261/267] Linking target lib/librte_cmdline.so.24.1 00:01:07.523 [262/267] Linking target lib/librte_hash.so.24.1 00:01:07.523 [263/267] Linking target lib/librte_ethdev.so.24.1 00:01:07.523 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:07.523 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:07.523 [266/267] Linking target lib/librte_power.so.24.1 00:01:07.523 [267/267] Linking target lib/librte_vhost.so.24.1 00:01:07.523 INFO: autodetecting backend as ninja 00:01:07.523 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:19.766 CC lib/ut/ut.o 00:01:19.766 CC lib/ut_mock/mock.o 00:01:19.766 CC lib/log/log.o 00:01:19.766 CC lib/log/log_flags.o 00:01:19.766 CC lib/log/log_deprecated.o 00:01:19.766 LIB libspdk_ut.a 00:01:19.766 LIB libspdk_ut_mock.a 00:01:19.766 LIB libspdk_log.a 00:01:19.766 SO libspdk_ut.so.2.0 
00:01:19.766 SO libspdk_ut_mock.so.6.0 00:01:19.766 SO libspdk_log.so.7.1 00:01:19.766 SYMLINK libspdk_ut.so 00:01:19.766 SYMLINK libspdk_ut_mock.so 00:01:19.766 SYMLINK libspdk_log.so 00:01:19.766 CC lib/util/base64.o 00:01:19.766 CC lib/util/bit_array.o 00:01:19.766 CC lib/util/cpuset.o 00:01:19.766 CC lib/util/crc16.o 00:01:19.766 CC lib/util/crc32.o 00:01:19.766 CC lib/util/crc32c.o 00:01:19.766 CC lib/util/crc32_ieee.o 00:01:19.766 CC lib/util/crc64.o 00:01:19.766 CC lib/util/dif.o 00:01:19.766 CC lib/util/fd_group.o 00:01:19.766 CC lib/util/file.o 00:01:19.766 CC lib/util/fd.o 00:01:19.766 CC lib/util/hexlify.o 00:01:19.766 CC lib/ioat/ioat.o 00:01:19.766 CC lib/util/iov.o 00:01:19.766 CC lib/util/pipe.o 00:01:19.766 CC lib/util/math.o 00:01:19.766 CC lib/util/net.o 00:01:19.766 CC lib/util/strerror_tls.o 00:01:19.766 CC lib/util/string.o 00:01:19.766 CC lib/util/uuid.o 00:01:19.766 CC lib/util/xor.o 00:01:19.766 CC lib/util/md5.o 00:01:19.766 CC lib/util/zipf.o 00:01:19.766 CC lib/dma/dma.o 00:01:19.766 CXX lib/trace_parser/trace.o 00:01:19.766 CC lib/vfio_user/host/vfio_user_pci.o 00:01:19.766 CC lib/vfio_user/host/vfio_user.o 00:01:19.766 LIB libspdk_dma.a 00:01:19.766 SO libspdk_dma.so.5.0 00:01:19.766 LIB libspdk_ioat.a 00:01:19.766 SYMLINK libspdk_dma.so 00:01:19.766 SO libspdk_ioat.so.7.0 00:01:19.766 SYMLINK libspdk_ioat.so 00:01:19.766 LIB libspdk_vfio_user.a 00:01:19.766 SO libspdk_vfio_user.so.5.0 00:01:19.766 SYMLINK libspdk_vfio_user.so 00:01:19.766 LIB libspdk_util.a 00:01:19.766 SO libspdk_util.so.10.1 00:01:19.766 LIB libspdk_trace_parser.a 00:01:19.766 SO libspdk_trace_parser.so.6.0 00:01:20.024 SYMLINK libspdk_util.so 00:01:20.024 SYMLINK libspdk_trace_parser.so 00:01:20.024 CC lib/conf/conf.o 00:01:20.024 CC lib/json/json_parse.o 00:01:20.024 CC lib/json/json_util.o 00:01:20.025 CC lib/env_dpdk/env.o 00:01:20.025 CC lib/env_dpdk/memory.o 00:01:20.025 CC lib/json/json_write.o 00:01:20.025 CC lib/env_dpdk/init.o 00:01:20.025 CC 
lib/env_dpdk/pci.o 00:01:20.025 CC lib/env_dpdk/threads.o 00:01:20.025 CC lib/env_dpdk/pci_ioat.o 00:01:20.025 CC lib/vmd/led.o 00:01:20.025 CC lib/idxd/idxd.o 00:01:20.025 CC lib/vmd/vmd.o 00:01:20.025 CC lib/rdma_utils/rdma_utils.o 00:01:20.025 CC lib/idxd/idxd_user.o 00:01:20.025 CC lib/idxd/idxd_kernel.o 00:01:20.025 CC lib/env_dpdk/pci_virtio.o 00:01:20.025 CC lib/env_dpdk/pci_vmd.o 00:01:20.025 CC lib/env_dpdk/pci_idxd.o 00:01:20.025 CC lib/env_dpdk/pci_event.o 00:01:20.025 CC lib/env_dpdk/sigbus_handler.o 00:01:20.025 CC lib/env_dpdk/pci_dpdk.o 00:01:20.025 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:20.025 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:20.284 LIB libspdk_conf.a 00:01:20.284 SO libspdk_conf.so.6.0 00:01:20.284 LIB libspdk_json.a 00:01:20.284 LIB libspdk_rdma_utils.a 00:01:20.284 SYMLINK libspdk_conf.so 00:01:20.284 SO libspdk_rdma_utils.so.1.0 00:01:20.284 SO libspdk_json.so.6.0 00:01:20.542 SYMLINK libspdk_json.so 00:01:20.543 SYMLINK libspdk_rdma_utils.so 00:01:20.543 LIB libspdk_idxd.a 00:01:20.543 CC lib/jsonrpc/jsonrpc_server.o 00:01:20.543 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:20.543 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:20.543 CC lib/jsonrpc/jsonrpc_client.o 00:01:20.543 LIB libspdk_vmd.a 00:01:20.543 CC lib/rdma_provider/common.o 00:01:20.543 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:20.543 SO libspdk_idxd.so.12.1 00:01:20.543 SO libspdk_vmd.so.6.0 00:01:20.803 SYMLINK libspdk_idxd.so 00:01:20.803 SYMLINK libspdk_vmd.so 00:01:20.803 LIB libspdk_rdma_provider.a 00:01:20.803 LIB libspdk_jsonrpc.a 00:01:20.803 SO libspdk_rdma_provider.so.7.0 00:01:20.803 SO libspdk_jsonrpc.so.6.0 00:01:20.803 SYMLINK libspdk_rdma_provider.so 00:01:20.803 SYMLINK libspdk_jsonrpc.so 00:01:21.062 CC lib/rpc/rpc.o 00:01:21.322 LIB libspdk_env_dpdk.a 00:01:21.322 LIB libspdk_rpc.a 00:01:21.322 SO libspdk_env_dpdk.so.15.1 00:01:21.322 SO libspdk_rpc.so.6.0 00:01:21.322 SYMLINK libspdk_rpc.so 00:01:21.322 SYMLINK libspdk_env_dpdk.so 00:01:21.582 CC 
lib/notify/notify.o 00:01:21.582 CC lib/notify/notify_rpc.o 00:01:21.582 CC lib/trace/trace.o 00:01:21.582 CC lib/trace/trace_flags.o 00:01:21.582 CC lib/trace/trace_rpc.o 00:01:21.582 CC lib/keyring/keyring.o 00:01:21.582 CC lib/keyring/keyring_rpc.o 00:01:21.841 LIB libspdk_notify.a 00:01:21.841 SO libspdk_notify.so.6.0 00:01:21.841 LIB libspdk_keyring.a 00:01:21.841 SYMLINK libspdk_notify.so 00:01:21.841 LIB libspdk_trace.a 00:01:21.841 SO libspdk_keyring.so.2.0 00:01:21.841 SO libspdk_trace.so.11.0 00:01:21.841 SYMLINK libspdk_keyring.so 00:01:21.841 SYMLINK libspdk_trace.so 00:01:22.099 CC lib/thread/thread.o 00:01:22.099 CC lib/thread/iobuf.o 00:01:22.099 CC lib/sock/sock.o 00:01:22.099 CC lib/sock/sock_rpc.o 00:01:22.360 LIB libspdk_sock.a 00:01:22.360 SO libspdk_sock.so.10.0 00:01:22.621 SYMLINK libspdk_sock.so 00:01:22.621 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:22.621 CC lib/nvme/nvme_ctrlr.o 00:01:22.621 CC lib/nvme/nvme_ns_cmd.o 00:01:22.621 CC lib/nvme/nvme_fabric.o 00:01:22.621 CC lib/nvme/nvme_ns.o 00:01:22.621 CC lib/nvme/nvme_pcie_common.o 00:01:22.621 CC lib/nvme/nvme_pcie.o 00:01:22.621 CC lib/nvme/nvme_qpair.o 00:01:22.621 CC lib/nvme/nvme.o 00:01:22.621 CC lib/nvme/nvme_quirks.o 00:01:22.621 CC lib/nvme/nvme_transport.o 00:01:22.621 CC lib/nvme/nvme_discovery.o 00:01:22.621 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:22.621 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:22.621 CC lib/nvme/nvme_tcp.o 00:01:22.621 CC lib/nvme/nvme_opal.o 00:01:22.621 CC lib/nvme/nvme_io_msg.o 00:01:22.621 CC lib/nvme/nvme_poll_group.o 00:01:22.621 CC lib/nvme/nvme_stubs.o 00:01:22.621 CC lib/nvme/nvme_zns.o 00:01:22.621 CC lib/nvme/nvme_auth.o 00:01:22.621 CC lib/nvme/nvme_cuse.o 00:01:22.621 CC lib/nvme/nvme_vfio_user.o 00:01:22.621 CC lib/nvme/nvme_rdma.o 00:01:23.560 LIB libspdk_thread.a 00:01:23.560 SO libspdk_thread.so.11.0 00:01:23.560 SYMLINK libspdk_thread.so 00:01:23.560 CC lib/virtio/virtio_vhost_user.o 00:01:23.560 CC lib/virtio/virtio.o 00:01:23.560 CC 
lib/virtio/virtio_vfio_user.o 00:01:23.560 CC lib/virtio/virtio_pci.o 00:01:23.560 CC lib/blob/blobstore.o 00:01:23.560 CC lib/blob/zeroes.o 00:01:23.560 CC lib/blob/request.o 00:01:23.560 CC lib/accel/accel.o 00:01:23.560 CC lib/blob/blob_bs_dev.o 00:01:23.560 CC lib/accel/accel_rpc.o 00:01:23.560 CC lib/fsdev/fsdev.o 00:01:23.560 CC lib/accel/accel_sw.o 00:01:23.560 CC lib/vfu_tgt/tgt_rpc.o 00:01:23.560 CC lib/vfu_tgt/tgt_endpoint.o 00:01:23.560 CC lib/fsdev/fsdev_io.o 00:01:23.560 CC lib/fsdev/fsdev_rpc.o 00:01:23.560 CC lib/init/json_config.o 00:01:23.560 CC lib/init/rpc.o 00:01:23.560 CC lib/init/subsystem.o 00:01:23.560 CC lib/init/subsystem_rpc.o 00:01:23.821 LIB libspdk_init.a 00:01:23.821 SO libspdk_init.so.6.0 00:01:23.821 LIB libspdk_virtio.a 00:01:23.821 LIB libspdk_vfu_tgt.a 00:01:23.821 SO libspdk_virtio.so.7.0 00:01:23.822 SYMLINK libspdk_init.so 00:01:23.822 SO libspdk_vfu_tgt.so.3.0 00:01:23.822 SYMLINK libspdk_vfu_tgt.so 00:01:23.822 SYMLINK libspdk_virtio.so 00:01:24.080 CC lib/event/app.o 00:01:24.080 CC lib/event/reactor.o 00:01:24.080 CC lib/event/log_rpc.o 00:01:24.080 CC lib/event/app_rpc.o 00:01:24.080 CC lib/event/scheduler_static.o 00:01:24.080 LIB libspdk_fsdev.a 00:01:24.080 SO libspdk_fsdev.so.2.0 00:01:24.080 SYMLINK libspdk_fsdev.so 00:01:24.340 LIB libspdk_nvme.a 00:01:24.340 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:24.340 LIB libspdk_event.a 00:01:24.340 SO libspdk_nvme.so.15.0 00:01:24.340 SO libspdk_event.so.14.0 00:01:24.340 LIB libspdk_accel.a 00:01:24.600 SO libspdk_accel.so.16.0 00:01:24.600 SYMLINK libspdk_event.so 00:01:24.600 SYMLINK libspdk_accel.so 00:01:24.600 SYMLINK libspdk_nvme.so 00:01:24.600 CC lib/bdev/bdev.o 00:01:24.600 CC lib/bdev/bdev_rpc.o 00:01:24.600 CC lib/bdev/bdev_zone.o 00:01:24.600 CC lib/bdev/part.o 00:01:24.600 CC lib/bdev/scsi_nvme.o 00:01:24.859 LIB libspdk_fuse_dispatcher.a 00:01:24.859 SO libspdk_fuse_dispatcher.so.1.0 00:01:24.859 SYMLINK libspdk_fuse_dispatcher.so 00:01:25.429 LIB 
libspdk_blob.a 00:01:25.429 SO libspdk_blob.so.12.0 00:01:25.429 SYMLINK libspdk_blob.so 00:01:25.688 CC lib/blobfs/blobfs.o 00:01:25.688 CC lib/blobfs/tree.o 00:01:25.688 CC lib/lvol/lvol.o 00:01:26.256 LIB libspdk_blobfs.a 00:01:26.256 SO libspdk_blobfs.so.11.0 00:01:26.256 SYMLINK libspdk_blobfs.so 00:01:26.256 LIB libspdk_lvol.a 00:01:26.256 SO libspdk_lvol.so.11.0 00:01:26.256 SYMLINK libspdk_lvol.so 00:01:26.827 LIB libspdk_bdev.a 00:01:26.827 SO libspdk_bdev.so.17.0 00:01:26.827 SYMLINK libspdk_bdev.so 00:01:27.089 CC lib/nbd/nbd.o 00:01:27.089 CC lib/nbd/nbd_rpc.o 00:01:27.089 CC lib/scsi/dev.o 00:01:27.089 CC lib/scsi/lun.o 00:01:27.089 CC lib/scsi/port.o 00:01:27.089 CC lib/scsi/scsi.o 00:01:27.089 CC lib/scsi/scsi_bdev.o 00:01:27.089 CC lib/ftl/ftl_core.o 00:01:27.089 CC lib/ftl/ftl_init.o 00:01:27.089 CC lib/scsi/scsi_pr.o 00:01:27.089 CC lib/ftl/ftl_debug.o 00:01:27.089 CC lib/ftl/ftl_layout.o 00:01:27.089 CC lib/scsi/scsi_rpc.o 00:01:27.089 CC lib/scsi/task.o 00:01:27.089 CC lib/ftl/ftl_io.o 00:01:27.089 CC lib/ftl/ftl_l2p.o 00:01:27.089 CC lib/ftl/ftl_sb.o 00:01:27.089 CC lib/ftl/ftl_l2p_flat.o 00:01:27.089 CC lib/ftl/ftl_nv_cache.o 00:01:27.089 CC lib/ftl/ftl_band.o 00:01:27.089 CC lib/ftl/ftl_band_ops.o 00:01:27.089 CC lib/ublk/ublk.o 00:01:27.089 CC lib/ublk/ublk_rpc.o 00:01:27.089 CC lib/ftl/ftl_writer.o 00:01:27.089 CC lib/ftl/ftl_rq.o 00:01:27.089 CC lib/ftl/ftl_reloc.o 00:01:27.089 CC lib/ftl/ftl_l2p_cache.o 00:01:27.089 CC lib/ftl/ftl_p2l.o 00:01:27.089 CC lib/ftl/ftl_p2l_log.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt.o 00:01:27.089 CC lib/nvmf/ctrlr.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:27.089 CC lib/nvmf/ctrlr_discovery.o 00:01:27.089 CC lib/nvmf/ctrlr_bdev.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:27.089 CC lib/nvmf/subsystem.o 00:01:27.089 CC lib/nvmf/nvmf.o 00:01:27.089 CC 
lib/ftl/mngt/ftl_mngt_ioch.o 00:01:27.089 CC lib/nvmf/nvmf_rpc.o 00:01:27.089 CC lib/nvmf/transport.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:27.089 CC lib/nvmf/tcp.o 00:01:27.089 CC lib/nvmf/stubs.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:27.089 CC lib/nvmf/mdns_server.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:27.089 CC lib/nvmf/vfio_user.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:27.089 CC lib/nvmf/rdma.o 00:01:27.089 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:27.089 CC lib/ftl/utils/ftl_conf.o 00:01:27.089 CC lib/nvmf/auth.o 00:01:27.089 CC lib/ftl/utils/ftl_md.o 00:01:27.089 CC lib/ftl/utils/ftl_mempool.o 00:01:27.089 CC lib/ftl/utils/ftl_bitmap.o 00:01:27.089 CC lib/ftl/utils/ftl_property.o 00:01:27.089 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:27.089 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:27.089 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:27.089 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:27.089 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:27.089 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:01:27.089 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:27.089 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:01:27.089 CC lib/ftl/base/ftl_base_dev.o 00:01:27.089 CC lib/ftl/base/ftl_base_bdev.o 00:01:27.089 CC lib/ftl/ftl_trace.o 00:01:27.658 LIB libspdk_nbd.a 00:01:27.658 SO libspdk_nbd.so.7.0 00:01:27.658 LIB libspdk_scsi.a 00:01:27.658 SYMLINK libspdk_nbd.so 00:01:27.658 SO libspdk_scsi.so.9.0 00:01:27.658 SYMLINK libspdk_scsi.so 00:01:27.658 LIB libspdk_ublk.a 00:01:27.658 SO libspdk_ublk.so.3.0 00:01:27.918 SYMLINK libspdk_ublk.so 00:01:27.919 CC lib/vhost/vhost.o 00:01:27.919 CC lib/vhost/vhost_rpc.o 00:01:27.919 CC lib/vhost/vhost_scsi.o 00:01:27.919 
CC lib/vhost/vhost_blk.o 00:01:27.919 CC lib/vhost/rte_vhost_user.o 00:01:27.919 CC lib/iscsi/init_grp.o 00:01:27.919 CC lib/iscsi/conn.o 00:01:27.919 CC lib/iscsi/param.o 00:01:27.919 CC lib/iscsi/iscsi.o 00:01:27.919 CC lib/iscsi/portal_grp.o 00:01:27.919 CC lib/iscsi/tgt_node.o 00:01:27.919 CC lib/iscsi/iscsi_subsystem.o 00:01:27.919 CC lib/iscsi/iscsi_rpc.o 00:01:27.919 CC lib/iscsi/task.o 00:01:27.919 LIB libspdk_ftl.a 00:01:28.204 SO libspdk_ftl.so.9.0 00:01:28.463 SYMLINK libspdk_ftl.so 00:01:28.724 LIB libspdk_vhost.a 00:01:28.724 SO libspdk_vhost.so.8.0 00:01:28.724 SYMLINK libspdk_vhost.so 00:01:28.984 LIB libspdk_nvmf.a 00:01:28.984 SO libspdk_nvmf.so.20.0 00:01:28.984 LIB libspdk_iscsi.a 00:01:28.984 SO libspdk_iscsi.so.8.0 00:01:28.984 SYMLINK libspdk_nvmf.so 00:01:29.243 SYMLINK libspdk_iscsi.so 00:01:29.504 CC module/env_dpdk/env_dpdk_rpc.o 00:01:29.504 CC module/vfu_device/vfu_virtio.o 00:01:29.504 CC module/vfu_device/vfu_virtio_scsi.o 00:01:29.504 CC module/vfu_device/vfu_virtio_fs.o 00:01:29.504 CC module/vfu_device/vfu_virtio_blk.o 00:01:29.504 CC module/vfu_device/vfu_virtio_rpc.o 00:01:29.504 CC module/sock/posix/posix.o 00:01:29.504 CC module/accel/error/accel_error.o 00:01:29.504 CC module/keyring/linux/keyring.o 00:01:29.504 CC module/keyring/linux/keyring_rpc.o 00:01:29.504 CC module/accel/error/accel_error_rpc.o 00:01:29.504 CC module/accel/ioat/accel_ioat_rpc.o 00:01:29.504 CC module/accel/ioat/accel_ioat.o 00:01:29.504 CC module/fsdev/aio/fsdev_aio.o 00:01:29.504 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:29.504 CC module/accel/iaa/accel_iaa.o 00:01:29.504 CC module/fsdev/aio/fsdev_aio_rpc.o 00:01:29.504 CC module/fsdev/aio/linux_aio_mgr.o 00:01:29.504 CC module/accel/dsa/accel_dsa.o 00:01:29.504 CC module/accel/iaa/accel_iaa_rpc.o 00:01:29.504 CC module/blob/bdev/blob_bdev.o 00:01:29.504 CC module/keyring/file/keyring.o 00:01:29.504 CC module/scheduler/gscheduler/gscheduler.o 00:01:29.504 CC 
module/keyring/file/keyring_rpc.o 00:01:29.504 CC module/accel/dsa/accel_dsa_rpc.o 00:01:29.504 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:29.504 LIB libspdk_env_dpdk_rpc.a 00:01:29.504 SO libspdk_env_dpdk_rpc.so.6.0 00:01:29.504 SYMLINK libspdk_env_dpdk_rpc.so 00:01:29.504 LIB libspdk_keyring_linux.a 00:01:29.504 LIB libspdk_accel_error.a 00:01:29.504 LIB libspdk_keyring_file.a 00:01:29.504 SO libspdk_keyring_linux.so.1.0 00:01:29.504 LIB libspdk_scheduler_gscheduler.a 00:01:29.765 LIB libspdk_scheduler_dpdk_governor.a 00:01:29.765 SO libspdk_accel_error.so.2.0 00:01:29.765 SO libspdk_keyring_file.so.2.0 00:01:29.765 SO libspdk_scheduler_gscheduler.so.4.0 00:01:29.765 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:29.765 LIB libspdk_accel_ioat.a 00:01:29.765 SYMLINK libspdk_keyring_linux.so 00:01:29.765 LIB libspdk_scheduler_dynamic.a 00:01:29.765 LIB libspdk_accel_iaa.a 00:01:29.765 SO libspdk_accel_ioat.so.6.0 00:01:29.765 SYMLINK libspdk_accel_error.so 00:01:29.765 SO libspdk_scheduler_dynamic.so.4.0 00:01:29.765 SYMLINK libspdk_keyring_file.so 00:01:29.765 SYMLINK libspdk_scheduler_gscheduler.so 00:01:29.765 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:29.765 SO libspdk_accel_iaa.so.3.0 00:01:29.765 SYMLINK libspdk_accel_ioat.so 00:01:29.765 SYMLINK libspdk_scheduler_dynamic.so 00:01:29.765 LIB libspdk_blob_bdev.a 00:01:29.765 LIB libspdk_accel_dsa.a 00:01:29.765 SYMLINK libspdk_accel_iaa.so 00:01:29.765 SO libspdk_blob_bdev.so.12.0 00:01:29.765 SO libspdk_accel_dsa.so.5.0 00:01:29.765 LIB libspdk_vfu_device.a 00:01:29.765 SYMLINK libspdk_blob_bdev.so 00:01:29.765 SYMLINK libspdk_accel_dsa.so 00:01:29.765 SO libspdk_vfu_device.so.3.0 00:01:29.765 SYMLINK libspdk_vfu_device.so 00:01:29.765 LIB libspdk_fsdev_aio.a 00:01:30.026 SO libspdk_fsdev_aio.so.1.0 00:01:30.026 LIB libspdk_sock_posix.a 00:01:30.026 SO libspdk_sock_posix.so.6.0 00:01:30.026 SYMLINK libspdk_fsdev_aio.so 00:01:30.026 SYMLINK libspdk_sock_posix.so 00:01:30.026 CC 
module/bdev/gpt/gpt.o 00:01:30.026 CC module/bdev/gpt/vbdev_gpt.o 00:01:30.026 CC module/bdev/delay/vbdev_delay.o 00:01:30.026 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:30.026 CC module/bdev/iscsi/bdev_iscsi.o 00:01:30.026 CC module/bdev/lvol/vbdev_lvol.o 00:01:30.026 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:30.026 CC module/bdev/nvme/bdev_nvme.o 00:01:30.026 CC module/bdev/ftl/bdev_ftl.o 00:01:30.026 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:30.026 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:30.026 CC module/bdev/nvme/nvme_rpc.o 00:01:30.026 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:30.026 CC module/bdev/nvme/vbdev_opal.o 00:01:30.026 CC module/bdev/nvme/bdev_mdns_client.o 00:01:30.026 CC module/bdev/raid/bdev_raid.o 00:01:30.026 CC module/bdev/split/vbdev_split.o 00:01:30.026 CC module/bdev/error/vbdev_error.o 00:01:30.026 CC module/blobfs/bdev/blobfs_bdev.o 00:01:30.026 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:30.026 CC module/bdev/raid/bdev_raid_rpc.o 00:01:30.026 CC module/bdev/error/vbdev_error_rpc.o 00:01:30.026 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:30.026 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:30.026 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:30.026 CC module/bdev/split/vbdev_split_rpc.o 00:01:30.026 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:30.026 CC module/bdev/raid/bdev_raid_sb.o 00:01:30.026 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:30.026 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:30.026 CC module/bdev/raid/concat.o 00:01:30.026 CC module/bdev/malloc/bdev_malloc.o 00:01:30.026 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:30.026 CC module/bdev/raid/raid0.o 00:01:30.026 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:30.026 CC module/bdev/raid/raid1.o 00:01:30.026 CC module/bdev/passthru/vbdev_passthru.o 00:01:30.026 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:30.026 CC module/bdev/null/bdev_null.o 00:01:30.026 CC module/bdev/null/bdev_null_rpc.o 00:01:30.026 CC module/bdev/aio/bdev_aio.o 
00:01:30.026 CC module/bdev/aio/bdev_aio_rpc.o 00:01:30.285 LIB libspdk_blobfs_bdev.a 00:01:30.285 SO libspdk_blobfs_bdev.so.6.0 00:01:30.285 LIB libspdk_bdev_split.a 00:01:30.285 LIB libspdk_bdev_gpt.a 00:01:30.285 LIB libspdk_bdev_null.a 00:01:30.285 LIB libspdk_bdev_iscsi.a 00:01:30.285 SYMLINK libspdk_blobfs_bdev.so 00:01:30.285 SO libspdk_bdev_split.so.6.0 00:01:30.285 SO libspdk_bdev_gpt.so.6.0 00:01:30.285 SO libspdk_bdev_null.so.6.0 00:01:30.286 LIB libspdk_bdev_error.a 00:01:30.286 SO libspdk_bdev_iscsi.so.6.0 00:01:30.286 LIB libspdk_bdev_ftl.a 00:01:30.286 SO libspdk_bdev_error.so.6.0 00:01:30.286 SYMLINK libspdk_bdev_split.so 00:01:30.286 LIB libspdk_bdev_passthru.a 00:01:30.286 SYMLINK libspdk_bdev_gpt.so 00:01:30.286 SO libspdk_bdev_ftl.so.6.0 00:01:30.286 SYMLINK libspdk_bdev_null.so 00:01:30.286 LIB libspdk_bdev_delay.a 00:01:30.286 SO libspdk_bdev_passthru.so.6.0 00:01:30.286 SYMLINK libspdk_bdev_iscsi.so 00:01:30.286 LIB libspdk_bdev_zone_block.a 00:01:30.545 LIB libspdk_bdev_aio.a 00:01:30.545 SYMLINK libspdk_bdev_error.so 00:01:30.545 SO libspdk_bdev_delay.so.6.0 00:01:30.545 SO libspdk_bdev_zone_block.so.6.0 00:01:30.545 LIB libspdk_bdev_malloc.a 00:01:30.545 SO libspdk_bdev_aio.so.6.0 00:01:30.545 SYMLINK libspdk_bdev_ftl.so 00:01:30.545 SYMLINK libspdk_bdev_passthru.so 00:01:30.545 SO libspdk_bdev_malloc.so.6.0 00:01:30.545 SYMLINK libspdk_bdev_zone_block.so 00:01:30.545 SYMLINK libspdk_bdev_delay.so 00:01:30.545 SYMLINK libspdk_bdev_aio.so 00:01:30.545 SYMLINK libspdk_bdev_malloc.so 00:01:30.545 LIB libspdk_bdev_lvol.a 00:01:30.545 SO libspdk_bdev_lvol.so.6.0 00:01:30.545 LIB libspdk_bdev_virtio.a 00:01:30.545 SO libspdk_bdev_virtio.so.6.0 00:01:30.545 SYMLINK libspdk_bdev_lvol.so 00:01:30.545 SYMLINK libspdk_bdev_virtio.so 00:01:31.115 LIB libspdk_bdev_raid.a 00:01:31.115 SO libspdk_bdev_raid.so.6.0 00:01:31.115 SYMLINK libspdk_bdev_raid.so 00:01:32.055 LIB libspdk_bdev_nvme.a 00:01:32.055 SO libspdk_bdev_nvme.so.7.1 00:01:32.055 SYMLINK 
libspdk_bdev_nvme.so 00:01:32.625 CC module/event/subsystems/iobuf/iobuf.o 00:01:32.625 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:32.625 CC module/event/subsystems/sock/sock.o 00:01:32.625 CC module/event/subsystems/fsdev/fsdev.o 00:01:32.625 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:32.625 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:32.625 CC module/event/subsystems/keyring/keyring.o 00:01:32.625 CC module/event/subsystems/scheduler/scheduler.o 00:01:32.625 CC module/event/subsystems/vmd/vmd.o 00:01:32.625 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:32.625 LIB libspdk_event_fsdev.a 00:01:32.625 LIB libspdk_event_vfu_tgt.a 00:01:32.625 SO libspdk_event_fsdev.so.1.0 00:01:32.625 SO libspdk_event_vfu_tgt.so.3.0 00:01:32.625 LIB libspdk_event_vhost_blk.a 00:01:32.625 LIB libspdk_event_sock.a 00:01:32.625 LIB libspdk_event_keyring.a 00:01:32.625 LIB libspdk_event_iobuf.a 00:01:32.625 LIB libspdk_event_scheduler.a 00:01:32.625 LIB libspdk_event_vmd.a 00:01:32.625 SO libspdk_event_sock.so.5.0 00:01:32.625 SO libspdk_event_vhost_blk.so.3.0 00:01:32.625 SO libspdk_event_keyring.so.1.0 00:01:32.625 SO libspdk_event_iobuf.so.3.0 00:01:32.625 SO libspdk_event_scheduler.so.4.0 00:01:32.625 SO libspdk_event_vmd.so.6.0 00:01:32.625 SYMLINK libspdk_event_fsdev.so 00:01:32.625 SYMLINK libspdk_event_vfu_tgt.so 00:01:32.625 SYMLINK libspdk_event_keyring.so 00:01:32.625 SYMLINK libspdk_event_vhost_blk.so 00:01:32.625 SYMLINK libspdk_event_sock.so 00:01:32.625 SYMLINK libspdk_event_scheduler.so 00:01:32.625 SYMLINK libspdk_event_vmd.so 00:01:32.625 SYMLINK libspdk_event_iobuf.so 00:01:32.886 CC module/event/subsystems/accel/accel.o 00:01:32.886 LIB libspdk_event_accel.a 00:01:32.886 SO libspdk_event_accel.so.6.0 00:01:33.146 SYMLINK libspdk_event_accel.so 00:01:33.146 CC module/event/subsystems/bdev/bdev.o 00:01:33.405 LIB libspdk_event_bdev.a 00:01:33.405 SO libspdk_event_bdev.so.6.0 00:01:33.405 SYMLINK libspdk_event_bdev.so 00:01:33.665 CC 
module/event/subsystems/nbd/nbd.o 00:01:33.665 CC module/event/subsystems/scsi/scsi.o 00:01:33.665 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:33.665 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:33.665 CC module/event/subsystems/ublk/ublk.o 00:01:33.665 LIB libspdk_event_scsi.a 00:01:33.665 LIB libspdk_event_nbd.a 00:01:33.665 LIB libspdk_event_ublk.a 00:01:33.665 SO libspdk_event_scsi.so.6.0 00:01:33.665 SO libspdk_event_nbd.so.6.0 00:01:33.925 SO libspdk_event_ublk.so.3.0 00:01:33.925 SYMLINK libspdk_event_scsi.so 00:01:33.925 SYMLINK libspdk_event_nbd.so 00:01:33.925 SYMLINK libspdk_event_ublk.so 00:01:33.925 LIB libspdk_event_nvmf.a 00:01:33.925 SO libspdk_event_nvmf.so.6.0 00:01:33.925 SYMLINK libspdk_event_nvmf.so 00:01:33.925 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:33.925 CC module/event/subsystems/iscsi/iscsi.o 00:01:34.184 LIB libspdk_event_vhost_scsi.a 00:01:34.184 LIB libspdk_event_iscsi.a 00:01:34.184 SO libspdk_event_vhost_scsi.so.3.0 00:01:34.184 SO libspdk_event_iscsi.so.6.0 00:01:34.184 SYMLINK libspdk_event_vhost_scsi.so 00:01:34.184 SYMLINK libspdk_event_iscsi.so 00:01:34.443 SO libspdk.so.6.0 00:01:34.443 SYMLINK libspdk.so 00:01:34.443 CXX app/trace/trace.o 00:01:34.443 CC app/spdk_top/spdk_top.o 00:01:34.443 CC app/trace_record/trace_record.o 00:01:34.443 TEST_HEADER include/spdk/accel.h 00:01:34.443 TEST_HEADER include/spdk/accel_module.h 00:01:34.443 TEST_HEADER include/spdk/assert.h 00:01:34.443 CC test/rpc_client/rpc_client_test.o 00:01:34.443 TEST_HEADER include/spdk/barrier.h 00:01:34.443 TEST_HEADER include/spdk/base64.h 00:01:34.443 CC app/spdk_lspci/spdk_lspci.o 00:01:34.443 TEST_HEADER include/spdk/bdev_module.h 00:01:34.443 CC app/spdk_nvme_discover/discovery_aer.o 00:01:34.443 TEST_HEADER include/spdk/bdev_zone.h 00:01:34.443 TEST_HEADER include/spdk/bit_array.h 00:01:34.443 TEST_HEADER include/spdk/bdev.h 00:01:34.443 TEST_HEADER include/spdk/bit_pool.h 00:01:34.443 TEST_HEADER include/spdk/blobfs_bdev.h 
00:01:34.443 TEST_HEADER include/spdk/blob_bdev.h
00:01:34.443 TEST_HEADER include/spdk/blob.h
00:01:34.443 TEST_HEADER include/spdk/conf.h
00:01:34.443 CC app/spdk_nvme_perf/perf.o
00:01:34.443 TEST_HEADER include/spdk/config.h
00:01:34.443 TEST_HEADER include/spdk/blobfs.h
00:01:34.443 TEST_HEADER include/spdk/cpuset.h
00:01:34.443 TEST_HEADER include/spdk/crc16.h
00:01:34.443 TEST_HEADER include/spdk/crc32.h
00:01:34.443 TEST_HEADER include/spdk/crc64.h
00:01:34.443 TEST_HEADER include/spdk/dif.h
00:01:34.443 TEST_HEADER include/spdk/dma.h
00:01:34.443 TEST_HEADER include/spdk/endian.h
00:01:34.443 TEST_HEADER include/spdk/env_dpdk.h
00:01:34.443 TEST_HEADER include/spdk/env.h
00:01:34.443 TEST_HEADER include/spdk/event.h
00:01:34.443 TEST_HEADER include/spdk/fd_group.h
00:01:34.443 TEST_HEADER include/spdk/fd.h
00:01:34.443 TEST_HEADER include/spdk/fsdev.h
00:01:34.443 TEST_HEADER include/spdk/fsdev_module.h
00:01:34.443 CC app/spdk_nvme_identify/identify.o
00:01:34.443 TEST_HEADER include/spdk/ftl.h
00:01:34.443 TEST_HEADER include/spdk/file.h
00:01:34.443 TEST_HEADER include/spdk/fuse_dispatcher.h
00:01:34.443 TEST_HEADER include/spdk/gpt_spec.h
00:01:34.443 TEST_HEADER include/spdk/hexlify.h
00:01:34.443 TEST_HEADER include/spdk/histogram_data.h
00:01:34.444 TEST_HEADER include/spdk/idxd.h
00:01:34.444 TEST_HEADER include/spdk/idxd_spec.h
00:01:34.444 TEST_HEADER include/spdk/init.h
00:01:34.444 TEST_HEADER include/spdk/ioat.h
00:01:34.444 TEST_HEADER include/spdk/ioat_spec.h
00:01:34.444 TEST_HEADER include/spdk/iscsi_spec.h
00:01:34.444 TEST_HEADER include/spdk/json.h
00:01:34.444 TEST_HEADER include/spdk/jsonrpc.h
00:01:34.444 TEST_HEADER include/spdk/keyring.h
00:01:34.444 TEST_HEADER include/spdk/keyring_module.h
00:01:34.444 TEST_HEADER include/spdk/log.h
00:01:34.444 TEST_HEADER include/spdk/likely.h
00:01:34.444 TEST_HEADER include/spdk/lvol.h
00:01:34.444 TEST_HEADER include/spdk/md5.h
00:01:34.444 TEST_HEADER include/spdk/memory.h
00:01:34.444
TEST_HEADER include/spdk/mmio.h
00:01:34.444 TEST_HEADER include/spdk/nbd.h
00:01:34.444 TEST_HEADER include/spdk/net.h
00:01:34.444 TEST_HEADER include/spdk/notify.h
00:01:34.444 CC examples/interrupt_tgt/interrupt_tgt.o
00:01:34.444 TEST_HEADER include/spdk/nvme.h
00:01:34.444 TEST_HEADER include/spdk/nvme_intel.h
00:01:34.444 TEST_HEADER include/spdk/nvme_ocssd.h
00:01:34.444 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:01:34.444 CC app/iscsi_tgt/iscsi_tgt.o
00:01:34.444 TEST_HEADER include/spdk/nvme_zns.h
00:01:34.444 TEST_HEADER include/spdk/nvme_spec.h
00:01:34.444 TEST_HEADER include/spdk/nvmf_cmd.h
00:01:34.444 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:01:34.444 TEST_HEADER include/spdk/nvmf.h
00:01:34.444 TEST_HEADER include/spdk/nvmf_spec.h
00:01:34.704 TEST_HEADER include/spdk/nvmf_transport.h
00:01:34.704 CC app/spdk_dd/spdk_dd.o
00:01:34.705 TEST_HEADER include/spdk/opal.h
00:01:34.705 TEST_HEADER include/spdk/opal_spec.h
00:01:34.705 TEST_HEADER include/spdk/pci_ids.h
00:01:34.705 TEST_HEADER include/spdk/pipe.h
00:01:34.705 TEST_HEADER include/spdk/queue.h
00:01:34.705 TEST_HEADER include/spdk/reduce.h
00:01:34.705 TEST_HEADER include/spdk/rpc.h
00:01:34.705 TEST_HEADER include/spdk/scheduler.h
00:01:34.705 CC app/nvmf_tgt/nvmf_main.o
00:01:34.705 TEST_HEADER include/spdk/scsi.h
00:01:34.705 TEST_HEADER include/spdk/scsi_spec.h
00:01:34.705 TEST_HEADER include/spdk/sock.h
00:01:34.705 TEST_HEADER include/spdk/stdinc.h
00:01:34.705 TEST_HEADER include/spdk/string.h
00:01:34.705 TEST_HEADER include/spdk/thread.h
00:01:34.705 TEST_HEADER include/spdk/trace.h
00:01:34.705 TEST_HEADER include/spdk/trace_parser.h
00:01:34.705 TEST_HEADER include/spdk/tree.h
00:01:34.705 TEST_HEADER include/spdk/ublk.h
00:01:34.705 TEST_HEADER include/spdk/util.h
00:01:34.705 TEST_HEADER include/spdk/uuid.h
00:01:34.705 TEST_HEADER include/spdk/version.h
00:01:34.705 TEST_HEADER include/spdk/vfio_user_pci.h
00:01:34.705 TEST_HEADER include/spdk/vfio_user_spec.h
00:01:34.705 TEST_HEADER include/spdk/vhost.h
00:01:34.705 TEST_HEADER include/spdk/vmd.h
00:01:34.705 TEST_HEADER include/spdk/xor.h
00:01:34.705 TEST_HEADER include/spdk/zipf.h
00:01:34.705 CXX test/cpp_headers/accel.o
00:01:34.705 CXX test/cpp_headers/accel_module.o
00:01:34.705 CXX test/cpp_headers/assert.o
00:01:34.705 CXX test/cpp_headers/barrier.o
00:01:34.705 CXX test/cpp_headers/base64.o
00:01:34.705 CXX test/cpp_headers/bdev.o
00:01:34.705 CXX test/cpp_headers/bdev_module.o
00:01:34.705 CXX test/cpp_headers/bdev_zone.o
00:01:34.705 CXX test/cpp_headers/bit_pool.o
00:01:34.705 CXX test/cpp_headers/bit_array.o
00:01:34.705 CXX test/cpp_headers/blob_bdev.o
00:01:34.705 CXX test/cpp_headers/blobfs_bdev.o
00:01:34.705 CXX test/cpp_headers/blobfs.o
00:01:34.705 CXX test/cpp_headers/blob.o
00:01:34.705 CXX test/cpp_headers/conf.o
00:01:34.705 CC app/spdk_tgt/spdk_tgt.o
00:01:34.705 CXX test/cpp_headers/config.o
00:01:34.705 CXX test/cpp_headers/crc16.o
00:01:34.705 CXX test/cpp_headers/cpuset.o
00:01:34.705 CXX test/cpp_headers/crc32.o
00:01:34.705 CXX test/cpp_headers/dif.o
00:01:34.705 CXX test/cpp_headers/crc64.o
00:01:34.705 CXX test/cpp_headers/dma.o
00:01:34.705 CXX test/cpp_headers/endian.o
00:01:34.705 CXX test/cpp_headers/env_dpdk.o
00:01:34.705 CXX test/cpp_headers/env.o
00:01:34.705 CXX test/cpp_headers/event.o
00:01:34.705 CXX test/cpp_headers/fd_group.o
00:01:34.705 CXX test/cpp_headers/fd.o
00:01:34.705 CXX test/cpp_headers/file.o
00:01:34.705 CXX test/cpp_headers/fsdev.o
00:01:34.705 CXX test/cpp_headers/fsdev_module.o
00:01:34.705 CXX test/cpp_headers/ftl.o
00:01:34.705 CXX test/cpp_headers/gpt_spec.o
00:01:34.705 CXX test/cpp_headers/fuse_dispatcher.o
00:01:34.705 CXX test/cpp_headers/histogram_data.o
00:01:34.705 CXX test/cpp_headers/hexlify.o
00:01:34.705 CXX test/cpp_headers/idxd.o
00:01:34.705 CXX test/cpp_headers/idxd_spec.o
00:01:34.705 CXX test/cpp_headers/init.o
00:01:34.705 CXX test/cpp_headers/ioat.o
00:01:34.705 CXX
test/cpp_headers/iscsi_spec.o
00:01:34.705 CXX test/cpp_headers/json.o
00:01:34.705 CXX test/cpp_headers/ioat_spec.o
00:01:34.705 CXX test/cpp_headers/keyring_module.o
00:01:34.705 CXX test/cpp_headers/keyring.o
00:01:34.705 CXX test/cpp_headers/jsonrpc.o
00:01:34.705 CXX test/cpp_headers/likely.o
00:01:34.705 CXX test/cpp_headers/lvol.o
00:01:34.705 CXX test/cpp_headers/log.o
00:01:34.705 CXX test/cpp_headers/nbd.o
00:01:34.705 CXX test/cpp_headers/md5.o
00:01:34.705 CXX test/cpp_headers/memory.o
00:01:34.705 CXX test/cpp_headers/mmio.o
00:01:34.705 CXX test/cpp_headers/notify.o
00:01:34.705 CXX test/cpp_headers/net.o
00:01:34.705 CXX test/cpp_headers/nvme_intel.o
00:01:34.705 CXX test/cpp_headers/nvme_ocssd_spec.o
00:01:34.705 CXX test/cpp_headers/nvme.o
00:01:34.705 CC test/thread/poller_perf/poller_perf.o
00:01:34.705 CXX test/cpp_headers/nvme_ocssd.o
00:01:34.705 CXX test/cpp_headers/nvmf_fc_spec.o
00:01:34.705 CXX test/cpp_headers/nvme_spec.o
00:01:34.705 CXX test/cpp_headers/nvmf.o
00:01:34.705 CXX test/cpp_headers/nvme_zns.o
00:01:34.705 CXX test/cpp_headers/nvmf_cmd.o
00:01:34.705 CXX test/cpp_headers/nvmf_spec.o
00:01:34.705 CXX test/cpp_headers/nvmf_transport.o
00:01:34.705 CXX test/cpp_headers/opal_spec.o
00:01:34.705 CXX test/cpp_headers/opal.o
00:01:34.705 CXX test/cpp_headers/pipe.o
00:01:34.705 CXX test/cpp_headers/pci_ids.o
00:01:34.705 CC examples/util/zipf/zipf.o
00:01:34.705 CXX test/cpp_headers/queue.o
00:01:34.705 CC examples/ioat/perf/perf.o
00:01:34.705 CXX test/cpp_headers/reduce.o
00:01:34.705 CXX test/cpp_headers/rpc.o
00:01:34.705 CXX test/cpp_headers/scsi.o
00:01:34.705 CXX test/cpp_headers/scheduler.o
00:01:34.705 CXX test/cpp_headers/scsi_spec.o
00:01:34.705 CXX test/cpp_headers/stdinc.o
00:01:34.705 CXX test/cpp_headers/sock.o
00:01:34.705 CC test/env/vtophys/vtophys.o
00:01:34.705 CXX test/cpp_headers/thread.o
00:01:34.705 CXX test/cpp_headers/trace.o
00:01:34.705 CXX test/cpp_headers/string.o
00:01:34.705 CC
test/app/histogram_perf/histogram_perf.o
00:01:34.705 CXX test/cpp_headers/trace_parser.o
00:01:34.705 CC app/fio/nvme/fio_plugin.o
00:01:34.705 CXX test/cpp_headers/tree.o
00:01:34.705 CXX test/cpp_headers/ublk.o
00:01:34.705 CC test/app/stub/stub.o
00:01:34.705 CXX test/cpp_headers/version.o
00:01:34.705 CXX test/cpp_headers/util.o
00:01:34.705 CXX test/cpp_headers/uuid.o
00:01:34.705 CXX test/cpp_headers/vfio_user_spec.o
00:01:34.705 CC test/app/jsoncat/jsoncat.o
00:01:34.705 CXX test/cpp_headers/vfio_user_pci.o
00:01:34.705 CXX test/cpp_headers/xor.o
00:01:34.705 CXX test/cpp_headers/vmd.o
00:01:34.705 CXX test/cpp_headers/vhost.o
00:01:34.705 CXX test/cpp_headers/zipf.o
00:01:34.705 CC examples/ioat/verify/verify.o
00:01:34.705 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:01:34.705 CC test/env/pci/pci_ut.o
00:01:34.705 CC test/env/memory/memory_ut.o
00:01:34.705 CC test/dma/test_dma/test_dma.o
00:01:34.705 CC app/fio/bdev/fio_plugin.o
00:01:34.705 CC test/app/bdev_svc/bdev_svc.o
00:01:34.966 LINK spdk_lspci
00:01:34.966 LINK rpc_client_test
00:01:34.966 LINK spdk_nvme_discover
00:01:34.966 LINK nvmf_tgt
00:01:34.966 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:01:34.966 LINK iscsi_tgt
00:01:34.966 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:01:34.966 CC test/env/mem_callbacks/mem_callbacks.o
00:01:34.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:01:34.966 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:01:35.226 LINK interrupt_tgt
00:01:35.226 LINK spdk_dd
00:01:35.226 LINK env_dpdk_post_init
00:01:35.226 LINK poller_perf
00:01:35.226 LINK spdk_trace_record
00:01:35.226 LINK jsoncat
00:01:35.226 LINK spdk_tgt
00:01:35.226 LINK vtophys
00:01:35.226 LINK histogram_perf
00:01:35.226 LINK zipf
00:01:35.226 LINK stub
00:01:35.484 LINK bdev_svc
00:01:35.484 LINK ioat_perf
00:01:35.484 LINK verify
00:01:35.484 CC test/event/reactor/reactor.o
00:01:35.484 CC test/event/event_perf/event_perf.o
00:01:35.484 CC test/event/reactor_perf/reactor_perf.o
00:01:35.484
LINK spdk_nvme_perf
00:01:35.484 CC test/event/app_repeat/app_repeat.o
00:01:35.484 LINK spdk_trace
00:01:35.484 LINK pci_ut
00:01:35.484 CC test/event/scheduler/scheduler.o
00:01:35.484 LINK spdk_bdev
00:01:35.484 CC examples/vmd/led/led.o
00:01:35.484 LINK vhost_fuzz
00:01:35.484 CC examples/vmd/lsvmd/lsvmd.o
00:01:35.743 CC examples/sock/hello_world/hello_sock.o
00:01:35.743 LINK test_dma
00:01:35.743 CC examples/idxd/perf/perf.o
00:01:35.743 CC examples/thread/thread/thread_ex.o
00:01:35.743 LINK reactor
00:01:35.743 LINK event_perf
00:01:35.743 LINK spdk_nvme
00:01:35.743 LINK reactor_perf
00:01:35.743 LINK app_repeat
00:01:35.743 LINK nvme_fuzz
00:01:35.743 LINK mem_callbacks
00:01:35.743 LINK lsvmd
00:01:35.743 LINK led
00:01:35.743 LINK spdk_nvme_identify
00:01:35.743 LINK hello_sock
00:01:35.743 CC app/vhost/vhost.o
00:01:35.743 LINK thread
00:01:35.743 LINK spdk_top
00:01:35.743 LINK scheduler
00:01:35.743 LINK idxd_perf
00:01:36.001 CC test/nvme/reset/reset.o
00:01:36.001 CC test/nvme/overhead/overhead.o
00:01:36.001 CC test/nvme/simple_copy/simple_copy.o
00:01:36.001 CC test/nvme/boot_partition/boot_partition.o
00:01:36.001 CC test/nvme/sgl/sgl.o
00:01:36.001 CC test/nvme/fused_ordering/fused_ordering.o
00:01:36.001 CC test/nvme/reserve/reserve.o
00:01:36.001 CC test/nvme/err_injection/err_injection.o
00:01:36.001 CC test/nvme/connect_stress/connect_stress.o
00:01:36.001 CC test/nvme/aer/aer.o
00:01:36.001 CC test/nvme/startup/startup.o
00:01:36.001 LINK vhost
00:01:36.001 CC test/nvme/compliance/nvme_compliance.o
00:01:36.001 CC test/nvme/doorbell_aers/doorbell_aers.o
00:01:36.001 CC test/nvme/fdp/fdp.o
00:01:36.001 CC test/nvme/e2edp/nvme_dp.o
00:01:36.001 CC test/nvme/cuse/cuse.o
00:01:36.001 CC test/accel/dif/dif.o
00:01:36.001 CC test/blobfs/mkfs/mkfs.o
00:01:36.001 CC examples/nvme/nvme_manage/nvme_manage.o
00:01:36.001 CC examples/nvme/hello_world/hello_world.o
00:01:36.001 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:01:36.001 CC
examples/nvme/arbitration/arbitration.o
00:01:36.001 CC examples/nvme/reconnect/reconnect.o
00:01:36.001 CC test/lvol/esnap/esnap.o
00:01:36.001 CC examples/nvme/hotplug/hotplug.o
00:01:36.001 CC examples/nvme/cmb_copy/cmb_copy.o
00:01:36.001 CC examples/nvme/abort/abort.o
00:01:36.001 LINK memory_ut
00:01:36.001 LINK connect_stress
00:01:36.001 LINK fused_ordering
00:01:36.001 LINK reserve
00:01:36.259 LINK boot_partition
00:01:36.259 LINK startup
00:01:36.259 LINK err_injection
00:01:36.259 LINK mkfs
00:01:36.259 LINK doorbell_aers
00:01:36.259 LINK sgl
00:01:36.259 CC examples/accel/perf/accel_perf.o
00:01:36.259 LINK nvme_dp
00:01:36.259 LINK overhead
00:01:36.259 CC examples/fsdev/hello_world/hello_fsdev.o
00:01:36.259 LINK aer
00:01:36.259 LINK nvme_compliance
00:01:36.259 CC examples/blob/cli/blobcli.o
00:01:36.259 LINK simple_copy
00:01:36.259 CC examples/blob/hello_world/hello_blob.o
00:01:36.259 LINK cmb_copy
00:01:36.259 LINK hello_world
00:01:36.259 LINK reset
00:01:36.259 LINK pmr_persistence
00:01:36.259 LINK hotplug
00:01:36.259 LINK fdp
00:01:36.259 LINK abort
00:01:36.259 LINK arbitration
00:01:36.259 LINK reconnect
00:01:36.259 LINK hello_fsdev
00:01:36.518 LINK hello_blob
00:01:36.518 LINK nvme_manage
00:01:36.518 LINK iscsi_fuzz
00:01:36.518 LINK blobcli
00:01:36.518 LINK dif
00:01:36.518 LINK accel_perf
00:01:36.783 CC test/bdev/bdevio/bdevio.o
00:01:36.783 CC examples/bdev/hello_world/hello_bdev.o
00:01:36.783 CC examples/bdev/bdevperf/bdevperf.o
00:01:37.077 LINK cuse
00:01:37.078 LINK bdevio
00:01:37.078 LINK hello_bdev
00:01:37.644 LINK bdevperf
00:01:37.903 CC examples/nvmf/nvmf/nvmf.o
00:01:38.160 LINK nvmf
00:01:39.536 LINK esnap
00:01:39.796
00:01:39.796 real 0m45.152s
00:01:39.796 user 6m29.523s
00:01:39.796 sys 3m34.199s
00:01:39.796 19:08:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:39.796 19:08:13 make -- common/autotest_common.sh@10 -- $ set +x
00:01:39.796 ************************************
00:01:39.796
END TEST make
00:01:39.796 ************************************
00:01:39.796 19:08:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:01:39.796 19:08:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:01:39.796 19:08:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:01:39.796 19:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.796 19:08:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:01:39.796 19:08:13 -- pm/common@44 -- $ pid=3372919
00:01:39.796 19:08:13 -- pm/common@50 -- $ kill -TERM 3372919
00:01:39.796 19:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.796 19:08:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:01:39.796 19:08:13 -- pm/common@44 -- $ pid=3372920
00:01:39.796 19:08:13 -- pm/common@50 -- $ kill -TERM 3372920
00:01:39.796 19:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.796 19:08:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:01:39.796 19:08:13 -- pm/common@44 -- $ pid=3372921
00:01:39.796 19:08:13 -- pm/common@50 -- $ kill -TERM 3372921
00:01:39.796 19:08:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.796 19:08:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:01:39.796 19:08:13 -- pm/common@44 -- $ pid=3372949
00:01:39.796 19:08:13 -- pm/common@50 -- $ sudo -E kill -TERM 3372949
00:01:39.796 19:08:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:01:39.796 19:08:13 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:39.796 19:08:13 --
common/autotest_common.sh@1692 -- # [[ y == y ]]
00:01:39.796 19:08:13 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:01:39.796 19:08:13 -- common/autotest_common.sh@1693 -- # lcov --version
00:01:39.796 19:08:13 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:01:39.796 19:08:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:01:39.796 19:08:13 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:01:39.796 19:08:13 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:01:39.796 19:08:13 -- scripts/common.sh@336 -- # IFS=.-:
00:01:39.796 19:08:13 -- scripts/common.sh@336 -- # read -ra ver1
00:01:39.796 19:08:13 -- scripts/common.sh@337 -- # IFS=.-:
00:01:39.796 19:08:13 -- scripts/common.sh@337 -- # read -ra ver2
00:01:39.796 19:08:13 -- scripts/common.sh@338 -- # local 'op=<'
00:01:39.796 19:08:13 -- scripts/common.sh@340 -- # ver1_l=2
00:01:39.796 19:08:13 -- scripts/common.sh@341 -- # ver2_l=1
00:01:39.796 19:08:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:01:39.796 19:08:13 -- scripts/common.sh@344 -- # case "$op" in
00:01:39.796 19:08:13 -- scripts/common.sh@345 -- # : 1
00:01:39.796 19:08:13 -- scripts/common.sh@364 -- # (( v = 0 ))
00:01:39.796 19:08:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) ))
00:01:39.796 19:08:13 -- scripts/common.sh@365 -- # decimal 1
00:01:39.796 19:08:13 -- scripts/common.sh@353 -- # local d=1
00:01:39.796 19:08:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:01:39.796 19:08:13 -- scripts/common.sh@355 -- # echo 1
00:01:39.796 19:08:13 -- scripts/common.sh@365 -- # ver1[v]=1
00:01:39.796 19:08:13 -- scripts/common.sh@366 -- # decimal 2
00:01:39.796 19:08:13 -- scripts/common.sh@353 -- # local d=2
00:01:39.796 19:08:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:01:39.796 19:08:13 -- scripts/common.sh@355 -- # echo 2
00:01:39.796 19:08:13 -- scripts/common.sh@366 -- # ver2[v]=2
00:01:39.796 19:08:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:01:39.796 19:08:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:01:39.796 19:08:13 -- scripts/common.sh@368 -- # return 0
00:01:39.796 19:08:13 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:01:39.796 19:08:13 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:01:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:01:39.796 --rc genhtml_branch_coverage=1
00:01:39.796 --rc genhtml_function_coverage=1
00:01:39.796 --rc genhtml_legend=1
00:01:39.796 --rc geninfo_all_blocks=1
00:01:39.796 --rc geninfo_unexecuted_blocks=1
00:01:39.796
00:01:39.796 '
00:01:39.796 19:08:13 -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:01:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:01:39.796 --rc genhtml_branch_coverage=1
00:01:39.796 --rc genhtml_function_coverage=1
00:01:39.796 --rc genhtml_legend=1
00:01:39.796 --rc geninfo_all_blocks=1
00:01:39.796 --rc geninfo_unexecuted_blocks=1
00:01:39.796
00:01:39.796 '
00:01:39.796 19:08:13 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:01:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:01:39.796 --rc genhtml_branch_coverage=1
00:01:39.796 --rc
genhtml_function_coverage=1
00:01:39.796 --rc genhtml_legend=1
00:01:39.796 --rc geninfo_all_blocks=1
00:01:39.796 --rc geninfo_unexecuted_blocks=1
00:01:39.796
00:01:39.796 '
00:01:39.796 19:08:13 -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:01:39.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:01:39.796 --rc genhtml_branch_coverage=1
00:01:39.796 --rc genhtml_function_coverage=1
00:01:39.796 --rc genhtml_legend=1
00:01:39.796 --rc geninfo_all_blocks=1
00:01:39.796 --rc geninfo_unexecuted_blocks=1
00:01:39.796
00:01:39.796 '
00:01:39.796 19:08:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:01:39.796 19:08:13 -- nvmf/common.sh@7 -- # uname -s
00:01:39.796 19:08:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:01:39.796 19:08:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:01:39.796 19:08:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:01:39.796 19:08:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:01:39.796 19:08:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:01:39.796 19:08:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:01:39.796 19:08:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:01:39.796 19:08:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:01:39.796 19:08:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:01:39.796 19:08:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:01:39.796 19:08:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:01:39.796 19:08:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:01:39.796 19:08:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:01:39.796 19:08:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:01:39.796 19:08:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:01:39.796 19:08:13 -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:01:39.796 19:08:13 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:39.796 19:08:13 -- scripts/common.sh@15 -- # shopt -s extglob
00:01:39.796 19:08:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:01:39.796 19:08:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:39.796 19:08:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:39.796 19:08:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.796 19:08:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.796 19:08:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.796 19:08:13 -- paths/export.sh@5 -- # export PATH
00:01:39.796 19:08:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:39.796 19:08:13 -- nvmf/common.sh@51 -- # : 0
00:01:39.796 19:08:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:01:39.796 19:08:13 -- nvmf/common.sh@53 -- #
build_nvmf_app_args
00:01:39.796 19:08:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:01:39.796 19:08:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:01:39.796 19:08:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:01:39.796 19:08:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:01:39.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:01:39.797 19:08:13 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:01:39.797 19:08:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:01:39.797 19:08:13 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:01:39.797 19:08:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:01:39.797 19:08:13 -- spdk/autotest.sh@32 -- # uname -s
00:01:39.797 19:08:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:01:39.797 19:08:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:01:39.797 19:08:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:01:39.797 19:08:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:01:39.797 19:08:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:01:39.797 19:08:13 -- spdk/autotest.sh@44 -- # modprobe nbd
00:01:39.797 19:08:13 -- spdk/autotest.sh@46 -- # type -P udevadm
00:01:39.797 19:08:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:01:39.797 19:08:13 -- spdk/autotest.sh@48 -- # udevadm_pid=3436575
00:01:39.797 19:08:13 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:01:39.797 19:08:13 -- pm/common@17 -- # local monitor
00:01:39.797 19:08:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.797 19:08:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:01:39.797 19:08:13 -- pm/common@19 -- # for monitor in
"${MONITOR_RESOURCES[@]}" 00:01:39.797 19:08:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.797 19:08:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:39.797 19:08:13 -- pm/common@25 -- # sleep 1 00:01:39.797 19:08:13 -- pm/common@21 -- # date +%s 00:01:39.797 19:08:13 -- pm/common@21 -- # date +%s 00:01:39.797 19:08:13 -- pm/common@21 -- # date +%s 00:01:39.797 19:08:13 -- pm/common@21 -- # date +%s 00:01:39.797 19:08:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644493 00:01:40.055 19:08:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644493 00:01:40.055 19:08:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644493 00:01:40.055 19:08:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732644493 00:01:40.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644493_collect-cpu-load.pm.log 00:01:40.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644493_collect-vmstat.pm.log 00:01:40.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644493_collect-cpu-temp.pm.log 00:01:40.055 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732644493_collect-bmc-pm.bmc.pm.log 00:01:40.992 
19:08:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:01:40.992 19:08:14 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:01:40.992 19:08:14 -- common/autotest_common.sh@726 -- # xtrace_disable
00:01:40.992 19:08:14 -- common/autotest_common.sh@10 -- # set +x
00:01:40.992 19:08:14 -- spdk/autotest.sh@59 -- # create_test_list
00:01:40.992 19:08:14 -- common/autotest_common.sh@752 -- # xtrace_disable
00:01:40.992 19:08:14 -- common/autotest_common.sh@10 -- # set +x
00:01:40.992 19:08:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:01:40.992 19:08:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.992 19:08:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.992 19:08:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:40.992 19:08:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:40.992 19:08:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:01:40.992 19:08:14 -- common/autotest_common.sh@1457 -- # uname
00:01:40.992 19:08:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:01:40.992 19:08:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:01:40.992 19:08:14 -- common/autotest_common.sh@1477 -- # uname
00:01:40.992 19:08:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:01:40.992 19:08:14 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:01:40.992 19:08:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:01:40.992 lcov: LCOV version 1.15
00:01:40.992 19:08:14 -- spdk/autotest.sh@72 -- # lcov --rc
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:01:51.087 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:01:51.087 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:02:01.070 19:08:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:02:01.070 19:08:34 -- common/autotest_common.sh@726 -- # xtrace_disable
00:02:01.070 19:08:34 -- common/autotest_common.sh@10 -- # set +x
00:02:01.070 19:08:34 -- spdk/autotest.sh@78 -- # rm -f
00:02:01.070 19:08:34 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:02.975 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:65:00.0 (144d a80a): Already using the nvme driver
00:02:02.975 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:02:02.975
0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:02:02.975 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:02:03.234 19:08:37 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:02:03.234 19:08:37 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:02:03.234 19:08:37 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:02:03.234 19:08:37 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:02:03.234 19:08:37 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:02:03.234 19:08:37 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:02:03.234 19:08:37 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:02:03.234 19:08:37 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:03.234 19:08:37 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:02:03.234 19:08:37 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:02:03.234 19:08:37 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:02:03.234 19:08:37 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:02:03.234 19:08:37 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:02:03.234 19:08:37 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:02:03.234 19:08:37 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:03.494 No valid GPT data, bailing
00:02:03.494 19:08:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:03.494 19:08:37 -- scripts/common.sh@394 -- # pt=
00:02:03.494 19:08:37 -- scripts/common.sh@395 -- # return 1
00:02:03.494 19:08:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:03.494 1+0 records in
00:02:03.494 1+0 records out
00:02:03.494 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00240275 s, 436 MB/s
00:02:03.494 19:08:37 -- spdk/autotest.sh@105 -- # sync
00:02:03.494 19:08:37 --
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:03.494 19:08:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:03.494 19:08:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:08.771 19:08:41 -- spdk/autotest.sh@111 -- # uname -s 00:02:08.771 19:08:41 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:08.771 19:08:41 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:08.771 19:08:41 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:10.679 Hugepages 00:02:10.679 node hugesize free / total 00:02:10.679 node0 1048576kB 0 / 0 00:02:10.679 node0 2048kB 0 / 0 00:02:10.679 node1 1048576kB 0 / 0 00:02:10.679 node1 2048kB 0 / 0 00:02:10.679 00:02:10.679 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:10.679 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:10.679 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:10.679 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:10.679 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:10.679 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:10.939 19:08:44 -- spdk/autotest.sh@117 -- # uname -s 00:02:10.939 19:08:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:10.939 19:08:44 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:02:10.939 19:08:44 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:13.476 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:13.476 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:15.385 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:15.644 19:08:49 -- common/autotest_common.sh@1517 -- # sleep 1 00:02:16.583 19:08:50 -- common/autotest_common.sh@1518 -- # bdfs=() 00:02:16.583 19:08:50 -- common/autotest_common.sh@1518 -- # local bdfs 00:02:16.583 19:08:50 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:02:16.583 19:08:50 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:02:16.583 19:08:50 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:16.583 19:08:50 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:16.583 19:08:50 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:02:16.583 19:08:50 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:16.583 19:08:50 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:02:16.583 19:08:50 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:16.583 19:08:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:02:16.583 19:08:50 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:19.122 Waiting for block devices as requested 00:02:19.122 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:02:19.122 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:02:19.122 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:02:19.382 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:02:19.382 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:02:19.382 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:02:19.382 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:02:19.642 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:02:19.642 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:02:19.642 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:02:19.902 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:02:19.902 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:02:19.902 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:02:19.902 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:02:20.161 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:02:20.161 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:02:20.161 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:02:20.420 19:08:54 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:02:20.420 19:08:54 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:02:20.420 19:08:54 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:02:20.420 19:08:54 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:02:20.420 19:08:54 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:02:20.420 19:08:54 -- common/autotest_common.sh@1531 -- # grep oacs 00:02:20.420 19:08:54 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:02:20.420 19:08:54 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:02:20.420 19:08:54 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:02:20.420 19:08:54 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:02:20.420 19:08:54 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:02:20.420 19:08:54 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:02:20.420 19:08:54 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:02:20.420 19:08:54 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:02:20.420 19:08:54 -- common/autotest_common.sh@1543 -- # continue 00:02:20.420 19:08:54 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:02:20.420 19:08:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:20.420 19:08:54 -- common/autotest_common.sh@10 -- # set +x 00:02:20.679 19:08:54 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:02:20.679 19:08:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:20.679 19:08:54 -- common/autotest_common.sh@10 -- # set +x 00:02:20.679 19:08:54 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:23.217 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.7 (8086 0b00): 
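The pre-cleanup trace extracts the OACS (Optional Admin Command Support) field from `nvme id-ctrl` with `grep oacs | cut -d: -f2`, then masks it to get `oacs_ns_manage=8`. The mask is `0x8`, the Namespace Management and Attachment support bit in OACS; with this controller's value of `0x5f` the bit is set, so the namespace revert path runs. A sketch of that arithmetic, using the value from this run:

```shell
# Mirror of the oacs_ns_manage computation traced above: AND the OACS field
# from `nvme id-ctrl` with 0x8 (the Namespace Management bit). A nonzero
# result means the controller supports namespace management.
oacs_ns_manage_bit() {
    local oacs="$1"    # hex string as parsed from id-ctrl, e.g. 0x5f
    echo $(( oacs & 0x8 ))
}
```

`oacs_ns_manage_bit 0x5f` yields 8, matching the `[[ 8 -ne 0 ]]` branch taken in the log.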
ioatdma -> vfio-pci 00:02:23.217 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:02:23.217 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:02:23.476 19:08:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:02:23.476 19:08:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:23.476 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:02:23.476 19:08:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:02:23.476 19:08:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:02:23.476 19:08:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:02:23.476 19:08:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:02:23.476 19:08:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:02:23.476 19:08:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:02:23.476 19:08:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:02:23.476 19:08:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:02:23.476 19:08:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:02:23.476 19:08:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:02:23.476 19:08:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:02:23.476 19:08:57 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:02:23.476 19:08:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:02:23.737 19:08:57 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:02:23.737 19:08:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:02:23.737 19:08:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:02:23.737 19:08:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:02:23.737 19:08:57 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:02:23.737 19:08:57 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:02:23.737 19:08:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:02:23.737 19:08:57 -- common/autotest_common.sh@1572 -- # return 0 00:02:23.737 19:08:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:02:23.737 19:08:57 -- common/autotest_common.sh@1580 -- # return 0 00:02:23.737 19:08:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:02:23.737 19:08:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:02:23.737 19:08:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:23.737 19:08:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:02:23.737 19:08:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:02:23.737 19:08:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:23.737 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:02:23.737 19:08:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:02:23.737 19:08:57 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:23.737 19:08:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:23.737 19:08:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:23.737 19:08:57 -- common/autotest_common.sh@10 -- # set +x 00:02:23.737 ************************************ 
00:02:23.737 START TEST env 00:02:23.737 ************************************ 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:02:23.737 * Looking for test storage... 00:02:23.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1693 -- # lcov --version 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:23.737 19:08:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:23.737 19:08:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:23.737 19:08:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:23.737 19:08:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:02:23.737 19:08:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:02:23.737 19:08:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:02:23.737 19:08:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:02:23.737 19:08:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:02:23.737 19:08:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:02:23.737 19:08:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:02:23.737 19:08:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:23.737 19:08:57 env -- scripts/common.sh@344 -- # case "$op" in 00:02:23.737 19:08:57 env -- scripts/common.sh@345 -- # : 1 00:02:23.737 19:08:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:23.737 19:08:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:23.737 19:08:57 env -- scripts/common.sh@365 -- # decimal 1 00:02:23.737 19:08:57 env -- scripts/common.sh@353 -- # local d=1 00:02:23.737 19:08:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:23.737 19:08:57 env -- scripts/common.sh@355 -- # echo 1 00:02:23.737 19:08:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:02:23.737 19:08:57 env -- scripts/common.sh@366 -- # decimal 2 00:02:23.737 19:08:57 env -- scripts/common.sh@353 -- # local d=2 00:02:23.737 19:08:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:23.737 19:08:57 env -- scripts/common.sh@355 -- # echo 2 00:02:23.737 19:08:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:02:23.737 19:08:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:23.737 19:08:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:23.737 19:08:57 env -- scripts/common.sh@368 -- # return 0 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:23.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.737 --rc genhtml_branch_coverage=1 00:02:23.737 --rc genhtml_function_coverage=1 00:02:23.737 --rc genhtml_legend=1 00:02:23.737 --rc geninfo_all_blocks=1 00:02:23.737 --rc geninfo_unexecuted_blocks=1 00:02:23.737 00:02:23.737 ' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:23.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.737 --rc genhtml_branch_coverage=1 00:02:23.737 --rc genhtml_function_coverage=1 00:02:23.737 --rc genhtml_legend=1 00:02:23.737 --rc geninfo_all_blocks=1 00:02:23.737 --rc geninfo_unexecuted_blocks=1 00:02:23.737 00:02:23.737 ' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:23.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:02:23.737 --rc genhtml_branch_coverage=1 00:02:23.737 --rc genhtml_function_coverage=1 00:02:23.737 --rc genhtml_legend=1 00:02:23.737 --rc geninfo_all_blocks=1 00:02:23.737 --rc geninfo_unexecuted_blocks=1 00:02:23.737 00:02:23.737 ' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:23.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:23.737 --rc genhtml_branch_coverage=1 00:02:23.737 --rc genhtml_function_coverage=1 00:02:23.737 --rc genhtml_legend=1 00:02:23.737 --rc geninfo_all_blocks=1 00:02:23.737 --rc geninfo_unexecuted_blocks=1 00:02:23.737 00:02:23.737 ' 00:02:23.737 19:08:57 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:23.737 19:08:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:23.737 19:08:57 env -- common/autotest_common.sh@10 -- # set +x 00:02:23.737 ************************************ 00:02:23.737 START TEST env_memory 00:02:23.737 ************************************ 00:02:23.737 19:08:57 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:02:23.737 00:02:23.737 00:02:23.737 CUnit - A unit testing framework for C - Version 2.1-3 00:02:23.737 http://cunit.sourceforge.net/ 00:02:23.737 00:02:23.737 00:02:23.737 Suite: memory 00:02:23.998 Test: alloc and free memory map ...[2024-11-26 19:08:57.617465] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:02:23.998 passed 00:02:23.998 Test: mem map translation ...[2024-11-26 19:08:57.642852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:02:23.998 [2024-11-26 
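The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` (`IFS=.-:`) and compares them component-wise, treating missing components as 0. A hedged re-sketch of that comparison as a standalone function (the real logic lives in `scripts/common.sh`; this is a simplified, purely numeric variant):

```shell
# Compare two dotted version strings component-wise, as the traced
# cmp_versions does: split on '.', '-' and ':', pad the shorter version
# with zeros, and report whether the first is strictly less than the second.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    local i x y
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}; y=${b[i]:-0}      # missing components count as 0
        (( x < y )) && { echo yes; return 0; }
        (( x > y )) && { echo no; return 0; }
    done
    echo no                             # equal versions are not "less than"
}
```

With the values from this run, `version_lt 1.15 2` answers `yes`, which is why the fallback lcov coverage options are exported.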
19:08:57.642899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:02:23.998 [2024-11-26 19:08:57.642945] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:02:23.998 [2024-11-26 19:08:57.642952] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:02:23.998 passed 00:02:23.998 Test: mem map registration ...[2024-11-26 19:08:57.698252] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:02:23.998 [2024-11-26 19:08:57.698277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:02:23.998 passed 00:02:23.998 Test: mem map adjacent registrations ...passed 00:02:23.998 00:02:23.998 Run Summary: Type Total Ran Passed Failed Inactive 00:02:23.998 suites 1 1 n/a 0 0 00:02:23.998 tests 4 4 4 0 0 00:02:23.998 asserts 152 152 152 0 n/a 00:02:23.998 00:02:23.998 Elapsed time = 0.182 seconds 00:02:23.998 00:02:23.998 real 0m0.190s 00:02:23.998 user 0m0.182s 00:02:23.998 sys 0m0.007s 00:02:23.998 19:08:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:23.998 19:08:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:02:23.998 ************************************ 00:02:23.998 END TEST env_memory 00:02:23.998 ************************************ 00:02:23.998 19:08:57 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:23.998 19:08:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:02:23.998 19:08:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:23.998 19:08:57 env -- common/autotest_common.sh@10 -- # set +x 00:02:23.998 ************************************ 00:02:23.998 START TEST env_vtophys 00:02:23.998 ************************************ 00:02:23.998 19:08:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:02:23.998 EAL: lib.eal log level changed from notice to debug 00:02:23.998 EAL: Detected lcore 0 as core 0 on socket 0 00:02:23.998 EAL: Detected lcore 1 as core 1 on socket 0 00:02:23.998 EAL: Detected lcore 2 as core 2 on socket 0 00:02:23.998 EAL: Detected lcore 3 as core 3 on socket 0 00:02:23.998 EAL: Detected lcore 4 as core 4 on socket 0 00:02:23.998 EAL: Detected lcore 5 as core 5 on socket 0 00:02:23.998 EAL: Detected lcore 6 as core 6 on socket 0 00:02:23.998 EAL: Detected lcore 7 as core 7 on socket 0 00:02:23.998 EAL: Detected lcore 8 as core 8 on socket 0 00:02:23.998 EAL: Detected lcore 9 as core 9 on socket 0 00:02:23.998 EAL: Detected lcore 10 as core 10 on socket 0 00:02:23.998 EAL: Detected lcore 11 as core 11 on socket 0 00:02:23.998 EAL: Detected lcore 12 as core 12 on socket 0 00:02:23.998 EAL: Detected lcore 13 as core 13 on socket 0 00:02:23.998 EAL: Detected lcore 14 as core 14 on socket 0 00:02:23.998 EAL: Detected lcore 15 as core 15 on socket 0 00:02:23.998 EAL: Detected lcore 16 as core 16 on socket 0 00:02:23.998 EAL: Detected lcore 17 as core 17 on socket 0 00:02:23.998 EAL: Detected lcore 18 as core 18 on socket 0 00:02:23.998 EAL: Detected lcore 19 as core 19 on socket 0 00:02:23.998 EAL: Detected lcore 20 as core 20 on socket 0 00:02:23.998 EAL: Detected lcore 21 as core 21 on socket 0 00:02:23.998 EAL: Detected lcore 22 as core 22 on socket 0 00:02:23.998 EAL: Detected lcore 23 as core 23 on socket 0 00:02:23.998 EAL: Detected lcore 24 as core 24 on socket 0 00:02:23.998 EAL: Detected lcore 25 
as core 25 on socket 0 00:02:23.998 EAL: Detected lcore 26 as core 26 on socket 0 00:02:23.998 EAL: Detected lcore 27 as core 27 on socket 0 00:02:23.998 EAL: Detected lcore 28 as core 28 on socket 0 00:02:23.998 EAL: Detected lcore 29 as core 29 on socket 0 00:02:23.998 EAL: Detected lcore 30 as core 30 on socket 0 00:02:23.998 EAL: Detected lcore 31 as core 31 on socket 0 00:02:23.998 EAL: Detected lcore 32 as core 32 on socket 0 00:02:23.998 EAL: Detected lcore 33 as core 33 on socket 0 00:02:23.998 EAL: Detected lcore 34 as core 34 on socket 0 00:02:23.998 EAL: Detected lcore 35 as core 35 on socket 0 00:02:23.998 EAL: Detected lcore 36 as core 0 on socket 1 00:02:23.998 EAL: Detected lcore 37 as core 1 on socket 1 00:02:23.998 EAL: Detected lcore 38 as core 2 on socket 1 00:02:23.999 EAL: Detected lcore 39 as core 3 on socket 1 00:02:23.999 EAL: Detected lcore 40 as core 4 on socket 1 00:02:23.999 EAL: Detected lcore 41 as core 5 on socket 1 00:02:23.999 EAL: Detected lcore 42 as core 6 on socket 1 00:02:23.999 EAL: Detected lcore 43 as core 7 on socket 1 00:02:23.999 EAL: Detected lcore 44 as core 8 on socket 1 00:02:23.999 EAL: Detected lcore 45 as core 9 on socket 1 00:02:23.999 EAL: Detected lcore 46 as core 10 on socket 1 00:02:23.999 EAL: Detected lcore 47 as core 11 on socket 1 00:02:23.999 EAL: Detected lcore 48 as core 12 on socket 1 00:02:23.999 EAL: Detected lcore 49 as core 13 on socket 1 00:02:23.999 EAL: Detected lcore 50 as core 14 on socket 1 00:02:23.999 EAL: Detected lcore 51 as core 15 on socket 1 00:02:23.999 EAL: Detected lcore 52 as core 16 on socket 1 00:02:23.999 EAL: Detected lcore 53 as core 17 on socket 1 00:02:23.999 EAL: Detected lcore 54 as core 18 on socket 1 00:02:23.999 EAL: Detected lcore 55 as core 19 on socket 1 00:02:23.999 EAL: Detected lcore 56 as core 20 on socket 1 00:02:23.999 EAL: Detected lcore 57 as core 21 on socket 1 00:02:23.999 EAL: Detected lcore 58 as core 22 on socket 1 00:02:23.999 EAL: Detected lcore 59 as 
core 23 on socket 1 00:02:23.999 EAL: Detected lcore 60 as core 24 on socket 1 00:02:23.999 EAL: Detected lcore 61 as core 25 on socket 1 00:02:23.999 EAL: Detected lcore 62 as core 26 on socket 1 00:02:23.999 EAL: Detected lcore 63 as core 27 on socket 1 00:02:23.999 EAL: Detected lcore 64 as core 28 on socket 1 00:02:23.999 EAL: Detected lcore 65 as core 29 on socket 1 00:02:23.999 EAL: Detected lcore 66 as core 30 on socket 1 00:02:23.999 EAL: Detected lcore 67 as core 31 on socket 1 00:02:23.999 EAL: Detected lcore 68 as core 32 on socket 1 00:02:23.999 EAL: Detected lcore 69 as core 33 on socket 1 00:02:23.999 EAL: Detected lcore 70 as core 34 on socket 1 00:02:23.999 EAL: Detected lcore 71 as core 35 on socket 1 00:02:23.999 EAL: Detected lcore 72 as core 0 on socket 0 00:02:23.999 EAL: Detected lcore 73 as core 1 on socket 0 00:02:23.999 EAL: Detected lcore 74 as core 2 on socket 0 00:02:23.999 EAL: Detected lcore 75 as core 3 on socket 0 00:02:23.999 EAL: Detected lcore 76 as core 4 on socket 0 00:02:23.999 EAL: Detected lcore 77 as core 5 on socket 0 00:02:23.999 EAL: Detected lcore 78 as core 6 on socket 0 00:02:23.999 EAL: Detected lcore 79 as core 7 on socket 0 00:02:23.999 EAL: Detected lcore 80 as core 8 on socket 0 00:02:23.999 EAL: Detected lcore 81 as core 9 on socket 0 00:02:23.999 EAL: Detected lcore 82 as core 10 on socket 0 00:02:23.999 EAL: Detected lcore 83 as core 11 on socket 0 00:02:23.999 EAL: Detected lcore 84 as core 12 on socket 0 00:02:23.999 EAL: Detected lcore 85 as core 13 on socket 0 00:02:23.999 EAL: Detected lcore 86 as core 14 on socket 0 00:02:23.999 EAL: Detected lcore 87 as core 15 on socket 0 00:02:23.999 EAL: Detected lcore 88 as core 16 on socket 0 00:02:23.999 EAL: Detected lcore 89 as core 17 on socket 0 00:02:23.999 EAL: Detected lcore 90 as core 18 on socket 0 00:02:23.999 EAL: Detected lcore 91 as core 19 on socket 0 00:02:23.999 EAL: Detected lcore 92 as core 20 on socket 0 00:02:23.999 EAL: Detected lcore 93 as 
core 21 on socket 0 00:02:23.999 EAL: Detected lcore 94 as core 22 on socket 0 00:02:23.999 EAL: Detected lcore 95 as core 23 on socket 0 00:02:23.999 EAL: Detected lcore 96 as core 24 on socket 0 00:02:23.999 EAL: Detected lcore 97 as core 25 on socket 0 00:02:23.999 EAL: Detected lcore 98 as core 26 on socket 0 00:02:23.999 EAL: Detected lcore 99 as core 27 on socket 0 00:02:23.999 EAL: Detected lcore 100 as core 28 on socket 0 00:02:23.999 EAL: Detected lcore 101 as core 29 on socket 0 00:02:23.999 EAL: Detected lcore 102 as core 30 on socket 0 00:02:23.999 EAL: Detected lcore 103 as core 31 on socket 0 00:02:23.999 EAL: Detected lcore 104 as core 32 on socket 0 00:02:23.999 EAL: Detected lcore 105 as core 33 on socket 0 00:02:23.999 EAL: Detected lcore 106 as core 34 on socket 0 00:02:23.999 EAL: Detected lcore 107 as core 35 on socket 0 00:02:23.999 EAL: Detected lcore 108 as core 0 on socket 1 00:02:23.999 EAL: Detected lcore 109 as core 1 on socket 1 00:02:23.999 EAL: Detected lcore 110 as core 2 on socket 1 00:02:23.999 EAL: Detected lcore 111 as core 3 on socket 1 00:02:23.999 EAL: Detected lcore 112 as core 4 on socket 1 00:02:23.999 EAL: Detected lcore 113 as core 5 on socket 1 00:02:23.999 EAL: Detected lcore 114 as core 6 on socket 1 00:02:23.999 EAL: Detected lcore 115 as core 7 on socket 1 00:02:23.999 EAL: Detected lcore 116 as core 8 on socket 1 00:02:23.999 EAL: Detected lcore 117 as core 9 on socket 1 00:02:23.999 EAL: Detected lcore 118 as core 10 on socket 1 00:02:23.999 EAL: Detected lcore 119 as core 11 on socket 1 00:02:23.999 EAL: Detected lcore 120 as core 12 on socket 1 00:02:23.999 EAL: Detected lcore 121 as core 13 on socket 1 00:02:23.999 EAL: Detected lcore 122 as core 14 on socket 1 00:02:23.999 EAL: Detected lcore 123 as core 15 on socket 1 00:02:23.999 EAL: Detected lcore 124 as core 16 on socket 1 00:02:23.999 EAL: Detected lcore 125 as core 17 on socket 1 00:02:23.999 EAL: Detected lcore 126 as core 18 on socket 1 00:02:23.999 
EAL: Detected lcore 127 as core 19 on socket 1 00:02:23.999 EAL: Skipped lcore 128 as core 20 on socket 1 00:02:23.999 EAL: Skipped lcore 129 as core 21 on socket 1 00:02:23.999 EAL: Skipped lcore 130 as core 22 on socket 1 00:02:23.999 EAL: Skipped lcore 131 as core 23 on socket 1 00:02:23.999 EAL: Skipped lcore 132 as core 24 on socket 1 00:02:23.999 EAL: Skipped lcore 133 as core 25 on socket 1 00:02:23.999 EAL: Skipped lcore 134 as core 26 on socket 1 00:02:23.999 EAL: Skipped lcore 135 as core 27 on socket 1 00:02:23.999 EAL: Skipped lcore 136 as core 28 on socket 1 00:02:23.999 EAL: Skipped lcore 137 as core 29 on socket 1 00:02:23.999 EAL: Skipped lcore 138 as core 30 on socket 1 00:02:23.999 EAL: Skipped lcore 139 as core 31 on socket 1 00:02:23.999 EAL: Skipped lcore 140 as core 32 on socket 1 00:02:23.999 EAL: Skipped lcore 141 as core 33 on socket 1 00:02:23.999 EAL: Skipped lcore 142 as core 34 on socket 1 00:02:23.999 EAL: Skipped lcore 143 as core 35 on socket 1 00:02:23.999 EAL: Maximum logical cores by configuration: 128 00:02:23.999 EAL: Detected CPU lcores: 128 00:02:23.999 EAL: Detected NUMA nodes: 2 00:02:23.999 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:02:23.999 EAL: Detected shared linkage of DPDK 00:02:23.999 EAL: No shared files mode enabled, IPC will be disabled 00:02:24.259 EAL: Bus pci wants IOVA as 'DC' 00:02:24.259 EAL: Buses did not request a specific IOVA mode. 00:02:24.259 EAL: IOMMU is available, selecting IOVA as VA mode. 00:02:24.259 EAL: Selected IOVA mode 'VA' 00:02:24.259 EAL: Probing VFIO support... 00:02:24.259 EAL: IOMMU type 1 (Type 1) is supported 00:02:24.259 EAL: IOMMU type 7 (sPAPR) is not supported 00:02:24.259 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:02:24.259 EAL: VFIO support initialized 00:02:24.259 EAL: Ask a virtual area of 0x2e000 bytes 00:02:24.259 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:02:24.259 EAL: Setting up physically contiguous memory... 
00:02:24.259 EAL: Setting maximum number of open files to 524288 00:02:24.259 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:02:24.259 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:02:24.259 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:02:24.259 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:02:24.259 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:02:24.259 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.259 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:02:24.259 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:24.259 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.259 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:02:24.260 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:02:24.260 EAL: Ask a virtual area of 0x61000 bytes 00:02:24.260 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:02:24.260 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:02:24.260 EAL: Ask a virtual area of 0x400000000 bytes 00:02:24.260 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:02:24.260 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:02:24.260 EAL: Hugepages will be freed exactly as allocated. 
00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: TSC frequency is ~2400000 KHz 00:02:24.260 EAL: Main lcore 0 is ready (tid=7fc819405a00;cpuset=[0]) 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 0 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 2MB 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: No PCI address specified using 'addr=' in: bus=pci 00:02:24.260 EAL: Mem event callback 'spdk:(nil)' registered 00:02:24.260 00:02:24.260 00:02:24.260 CUnit - A unit testing framework for C - Version 2.1-3 00:02:24.260 http://cunit.sourceforge.net/ 00:02:24.260 00:02:24.260 00:02:24.260 Suite: components_suite 00:02:24.260 Test: vtophys_malloc_test ...passed 00:02:24.260 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 4MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 4MB 00:02:24.260 EAL: Trying to obtain current memory policy. 
00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 6MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 6MB 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 10MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 10MB 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 18MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 18MB 00:02:24.260 EAL: Trying to obtain current memory policy. 
00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 34MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 34MB 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 66MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 66MB 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 130MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 130MB 00:02:24.260 EAL: Trying to obtain current memory policy. 
00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.260 EAL: Restoring previous memory policy: 4 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was expanded by 258MB 00:02:24.260 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.260 EAL: request: mp_malloc_sync 00:02:24.260 EAL: No shared files mode enabled, IPC is disabled 00:02:24.260 EAL: Heap on socket 0 was shrunk by 258MB 00:02:24.260 EAL: Trying to obtain current memory policy. 00:02:24.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.519 EAL: Restoring previous memory policy: 4 00:02:24.519 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.519 EAL: request: mp_malloc_sync 00:02:24.519 EAL: No shared files mode enabled, IPC is disabled 00:02:24.519 EAL: Heap on socket 0 was expanded by 514MB 00:02:24.519 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.519 EAL: request: mp_malloc_sync 00:02:24.519 EAL: No shared files mode enabled, IPC is disabled 00:02:24.519 EAL: Heap on socket 0 was shrunk by 514MB 00:02:24.519 EAL: Trying to obtain current memory policy. 
00:02:24.519 EAL: Setting policy MPOL_PREFERRED for socket 0 00:02:24.779 EAL: Restoring previous memory policy: 4 00:02:24.779 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.779 EAL: request: mp_malloc_sync 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 EAL: Heap on socket 0 was expanded by 1026MB 00:02:24.779 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.779 EAL: request: mp_malloc_sync 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 EAL: Heap on socket 0 was shrunk by 1026MB 00:02:24.779 passed 00:02:24.779 00:02:24.779 Run Summary: Type Total Ran Passed Failed Inactive 00:02:24.779 suites 1 1 n/a 0 0 00:02:24.779 tests 2 2 2 0 0 00:02:24.779 asserts 497 497 497 0 n/a 00:02:24.779 00:02:24.779 Elapsed time = 0.687 seconds 00:02:24.779 EAL: Calling mem event callback 'spdk:(nil)' 00:02:24.779 EAL: request: mp_malloc_sync 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 EAL: Heap on socket 0 was shrunk by 2MB 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 EAL: No shared files mode enabled, IPC is disabled 00:02:24.779 00:02:24.779 real 0m0.815s 00:02:24.779 user 0m0.427s 00:02:24.779 sys 0m0.357s 00:02:24.779 19:08:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:24.779 19:08:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:02:24.779 ************************************ 00:02:24.779 END TEST env_vtophys 00:02:24.779 ************************************ 00:02:25.039 19:08:58 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:25.039 19:08:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:25.039 19:08:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:25.039 19:08:58 env -- common/autotest_common.sh@10 -- # set +x 00:02:25.039 
************************************ 00:02:25.039 START TEST env_pci 00:02:25.039 ************************************ 00:02:25.039 19:08:58 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:02:25.039 00:02:25.039 00:02:25.039 CUnit - A unit testing framework for C - Version 2.1-3 00:02:25.039 http://cunit.sourceforge.net/ 00:02:25.039 00:02:25.039 00:02:25.039 Suite: pci 00:02:25.039 Test: pci_hook ...[2024-11-26 19:08:58.693714] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3453695 has claimed it 00:02:25.039 EAL: Cannot find device (10000:00:01.0) 00:02:25.039 EAL: Failed to attach device on primary process 00:02:25.039 passed 00:02:25.039 00:02:25.039 Run Summary: Type Total Ran Passed Failed Inactive 00:02:25.039 suites 1 1 n/a 0 0 00:02:25.039 tests 1 1 1 0 0 00:02:25.039 asserts 25 25 25 0 n/a 00:02:25.039 00:02:25.039 Elapsed time = 0.025 seconds 00:02:25.039 00:02:25.039 real 0m0.036s 00:02:25.039 user 0m0.009s 00:02:25.039 sys 0m0.026s 00:02:25.039 19:08:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:25.039 19:08:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:02:25.039 ************************************ 00:02:25.039 END TEST env_pci 00:02:25.039 ************************************ 00:02:25.039 19:08:58 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:02:25.039 19:08:58 env -- env/env.sh@15 -- # uname 00:02:25.039 19:08:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:02:25.039 19:08:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:02:25.039 19:08:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:25.039 19:08:58 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:02:25.039 19:08:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:25.039 19:08:58 env -- common/autotest_common.sh@10 -- # set +x 00:02:25.039 ************************************ 00:02:25.039 START TEST env_dpdk_post_init 00:02:25.039 ************************************ 00:02:25.039 19:08:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:02:25.039 EAL: Detected CPU lcores: 128 00:02:25.039 EAL: Detected NUMA nodes: 2 00:02:25.039 EAL: Detected shared linkage of DPDK 00:02:25.039 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:25.039 EAL: Selected IOVA mode 'VA' 00:02:25.039 EAL: VFIO support initialized 00:02:25.039 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:25.039 EAL: Using IOMMU type 1 (Type 1) 00:02:25.298 EAL: Ignore mapping IO port bar(1) 00:02:25.298 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:02:25.556 EAL: Ignore mapping IO port bar(1) 00:02:25.556 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:02:25.814 EAL: Ignore mapping IO port bar(1) 00:02:25.814 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:02:26.072 EAL: Ignore mapping IO port bar(1) 00:02:26.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:02:26.072 EAL: Ignore mapping IO port bar(1) 00:02:26.330 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:02:26.330 EAL: Ignore mapping IO port bar(1) 00:02:26.587 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:02:26.587 EAL: Ignore mapping IO port bar(1) 00:02:26.846 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:02:26.846 EAL: Ignore mapping IO port bar(1) 00:02:26.846 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:02:27.104 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:02:27.362 EAL: Ignore mapping IO port bar(1) 00:02:27.362 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:02:27.620 EAL: Ignore mapping IO port bar(1) 00:02:27.620 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:02:27.877 EAL: Ignore mapping IO port bar(1) 00:02:27.877 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:02:27.877 EAL: Ignore mapping IO port bar(1) 00:02:28.136 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:02:28.136 EAL: Ignore mapping IO port bar(1) 00:02:28.395 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:02:28.395 EAL: Ignore mapping IO port bar(1) 00:02:28.654 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:02:28.654 EAL: Ignore mapping IO port bar(1) 00:02:28.654 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:02:28.913 EAL: Ignore mapping IO port bar(1) 00:02:28.913 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:02:28.913 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:02:28.913 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:02:29.172 Starting DPDK initialization... 00:02:29.172 Starting SPDK post initialization... 00:02:29.172 SPDK NVMe probe 00:02:29.172 Attaching to 0000:65:00.0 00:02:29.172 Attached to 0000:65:00.0 00:02:29.172 Cleaning up... 
00:02:31.079 00:02:31.079 real 0m5.729s 00:02:31.079 user 0m0.095s 00:02:31.079 sys 0m0.185s 00:02:31.079 19:09:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:31.079 19:09:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:02:31.079 ************************************ 00:02:31.079 END TEST env_dpdk_post_init 00:02:31.079 ************************************ 00:02:31.079 19:09:04 env -- env/env.sh@26 -- # uname 00:02:31.079 19:09:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:02:31.079 19:09:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:02:31.079 19:09:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:31.079 19:09:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:31.079 19:09:04 env -- common/autotest_common.sh@10 -- # set +x 00:02:31.079 ************************************ 00:02:31.079 START TEST env_mem_callbacks 00:02:31.079 ************************************ 00:02:31.079 19:09:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:02:31.079 EAL: Detected CPU lcores: 128 00:02:31.079 EAL: Detected NUMA nodes: 2 00:02:31.079 EAL: Detected shared linkage of DPDK 00:02:31.079 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:02:31.079 EAL: Selected IOVA mode 'VA' 00:02:31.079 EAL: VFIO support initialized 00:02:31.079 TELEMETRY: No legacy callbacks, legacy socket not created 00:02:31.079 00:02:31.079 00:02:31.079 CUnit - A unit testing framework for C - Version 2.1-3 00:02:31.079 http://cunit.sourceforge.net/ 00:02:31.079 00:02:31.079 00:02:31.079 Suite: memory 00:02:31.079 Test: test ... 
00:02:31.079 register 0x200000200000 2097152 00:02:31.079 malloc 3145728 00:02:31.079 register 0x200000400000 4194304 00:02:31.079 buf 0x200000500000 len 3145728 PASSED 00:02:31.079 malloc 64 00:02:31.079 buf 0x2000004fff40 len 64 PASSED 00:02:31.079 malloc 4194304 00:02:31.079 register 0x200000800000 6291456 00:02:31.079 buf 0x200000a00000 len 4194304 PASSED 00:02:31.079 free 0x200000500000 3145728 00:02:31.079 free 0x2000004fff40 64 00:02:31.079 unregister 0x200000400000 4194304 PASSED 00:02:31.079 free 0x200000a00000 4194304 00:02:31.079 unregister 0x200000800000 6291456 PASSED 00:02:31.079 malloc 8388608 00:02:31.079 register 0x200000400000 10485760 00:02:31.079 buf 0x200000600000 len 8388608 PASSED 00:02:31.079 free 0x200000600000 8388608 00:02:31.079 unregister 0x200000400000 10485760 PASSED 00:02:31.079 passed 00:02:31.079 00:02:31.079 Run Summary: Type Total Ran Passed Failed Inactive 00:02:31.079 suites 1 1 n/a 0 0 00:02:31.079 tests 1 1 1 0 0 00:02:31.079 asserts 15 15 15 0 n/a 00:02:31.079 00:02:31.079 Elapsed time = 0.008 seconds 00:02:31.079 00:02:31.079 real 0m0.053s 00:02:31.080 user 0m0.017s 00:02:31.080 sys 0m0.036s 00:02:31.080 19:09:04 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:31.080 19:09:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:02:31.080 ************************************ 00:02:31.080 END TEST env_mem_callbacks 00:02:31.080 ************************************ 00:02:31.080 00:02:31.080 real 0m7.213s 00:02:31.080 user 0m0.893s 00:02:31.080 sys 0m0.859s 00:02:31.080 19:09:04 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:31.080 19:09:04 env -- common/autotest_common.sh@10 -- # set +x 00:02:31.080 ************************************ 00:02:31.080 END TEST env 00:02:31.080 ************************************ 00:02:31.080 19:09:04 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:02:31.080 19:09:04 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:31.080 19:09:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:31.080 19:09:04 -- common/autotest_common.sh@10 -- # set +x 00:02:31.080 ************************************ 00:02:31.080 START TEST rpc 00:02:31.080 ************************************ 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:02:31.080 * Looking for test storage... 00:02:31.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:31.080 19:09:04 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:31.080 19:09:04 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:31.080 19:09:04 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:31.080 19:09:04 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:31.080 19:09:04 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:31.080 19:09:04 rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:31.080 19:09:04 rpc -- scripts/common.sh@345 -- # : 1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:31.080 19:09:04 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:31.080 19:09:04 rpc -- scripts/common.sh@365 -- # decimal 1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@353 -- # local d=1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:31.080 19:09:04 rpc -- scripts/common.sh@355 -- # echo 1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:31.080 19:09:04 rpc -- scripts/common.sh@366 -- # decimal 2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@353 -- # local d=2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:31.080 19:09:04 rpc -- scripts/common.sh@355 -- # echo 2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:31.080 19:09:04 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:31.080 19:09:04 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:31.080 19:09:04 rpc -- scripts/common.sh@368 -- # return 0 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.080 --rc genhtml_branch_coverage=1 00:02:31.080 --rc genhtml_function_coverage=1 00:02:31.080 --rc genhtml_legend=1 00:02:31.080 --rc geninfo_all_blocks=1 00:02:31.080 --rc geninfo_unexecuted_blocks=1 00:02:31.080 00:02:31.080 ' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.080 --rc genhtml_branch_coverage=1 00:02:31.080 --rc genhtml_function_coverage=1 00:02:31.080 --rc genhtml_legend=1 00:02:31.080 --rc geninfo_all_blocks=1 00:02:31.080 --rc geninfo_unexecuted_blocks=1 00:02:31.080 00:02:31.080 ' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:02:31.080 --rc genhtml_branch_coverage=1 00:02:31.080 --rc genhtml_function_coverage=1 00:02:31.080 --rc genhtml_legend=1 00:02:31.080 --rc geninfo_all_blocks=1 00:02:31.080 --rc geninfo_unexecuted_blocks=1 00:02:31.080 00:02:31.080 ' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:31.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:31.080 --rc genhtml_branch_coverage=1 00:02:31.080 --rc genhtml_function_coverage=1 00:02:31.080 --rc genhtml_legend=1 00:02:31.080 --rc geninfo_all_blocks=1 00:02:31.080 --rc geninfo_unexecuted_blocks=1 00:02:31.080 00:02:31.080 ' 00:02:31.080 19:09:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3455264 00:02:31.080 19:09:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:31.080 19:09:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3455264 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@835 -- # '[' -z 3455264 ']' 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:31.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:31.080 19:09:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:31.080 19:09:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:02:31.080 [2024-11-26 19:09:04.874163] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:31.080 [2024-11-26 19:09:04.874232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3455264 ] 00:02:31.339 [2024-11-26 19:09:04.958591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:31.339 [2024-11-26 19:09:05.010679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:02:31.339 [2024-11-26 19:09:05.010732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3455264' to capture a snapshot of events at runtime. 00:02:31.339 [2024-11-26 19:09:05.010741] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:02:31.339 [2024-11-26 19:09:05.010749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:02:31.339 [2024-11-26 19:09:05.010755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3455264 for offline analysis/debug. 
00:02:31.340 [2024-11-26 19:09:05.011571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:31.907 19:09:05 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:31.907 19:09:05 rpc -- common/autotest_common.sh@868 -- # return 0 00:02:31.907 19:09:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:31.907 19:09:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:31.907 19:09:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:02:31.907 19:09:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:02:31.907 19:09:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:31.907 19:09:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:31.907 19:09:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:31.907 ************************************ 00:02:31.907 START TEST rpc_integrity 00:02:31.907 ************************************ 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:31.907 19:09:05 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:31.907 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:31.907 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:31.907 { 00:02:31.907 "name": "Malloc0", 00:02:31.907 "aliases": [ 00:02:31.907 "3f637efe-efa7-46b0-ab84-b5099f59c58c" 00:02:31.907 ], 00:02:31.907 "product_name": "Malloc disk", 00:02:31.907 "block_size": 512, 00:02:31.907 "num_blocks": 16384, 00:02:31.907 "uuid": "3f637efe-efa7-46b0-ab84-b5099f59c58c", 00:02:31.907 "assigned_rate_limits": { 00:02:31.907 "rw_ios_per_sec": 0, 00:02:31.907 "rw_mbytes_per_sec": 0, 00:02:31.907 "r_mbytes_per_sec": 0, 00:02:31.907 "w_mbytes_per_sec": 0 00:02:31.907 }, 00:02:31.907 "claimed": false, 00:02:31.907 "zoned": false, 00:02:31.907 "supported_io_types": { 00:02:31.907 "read": true, 00:02:31.907 "write": true, 00:02:31.907 "unmap": true, 00:02:31.907 "flush": true, 00:02:31.907 "reset": true, 00:02:31.907 "nvme_admin": false, 00:02:31.907 "nvme_io": false, 00:02:31.907 "nvme_io_md": false, 00:02:31.907 "write_zeroes": true, 00:02:31.907 "zcopy": true, 00:02:31.907 "get_zone_info": false, 00:02:31.907 
"zone_management": false, 00:02:31.907 "zone_append": false, 00:02:31.907 "compare": false, 00:02:31.907 "compare_and_write": false, 00:02:31.907 "abort": true, 00:02:31.907 "seek_hole": false, 00:02:31.907 "seek_data": false, 00:02:31.907 "copy": true, 00:02:31.907 "nvme_iov_md": false 00:02:31.907 }, 00:02:31.907 "memory_domains": [ 00:02:31.907 { 00:02:31.907 "dma_device_id": "system", 00:02:31.907 "dma_device_type": 1 00:02:31.907 }, 00:02:31.908 { 00:02:31.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:31.908 "dma_device_type": 2 00:02:31.908 } 00:02:31.908 ], 00:02:31.908 "driver_specific": {} 00:02:31.908 } 00:02:31.908 ]' 00:02:31.908 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 [2024-11-26 19:09:05.793342] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:02:32.166 [2024-11-26 19:09:05.793387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:32.166 [2024-11-26 19:09:05.793404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1725580 00:02:32.166 [2024-11-26 19:09:05.793412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:32.166 [2024-11-26 19:09:05.795003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:32.166 [2024-11-26 19:09:05.795040] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:32.166 Passthru0 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:32.166 { 00:02:32.166 "name": "Malloc0", 00:02:32.166 "aliases": [ 00:02:32.166 "3f637efe-efa7-46b0-ab84-b5099f59c58c" 00:02:32.166 ], 00:02:32.166 "product_name": "Malloc disk", 00:02:32.166 "block_size": 512, 00:02:32.166 "num_blocks": 16384, 00:02:32.166 "uuid": "3f637efe-efa7-46b0-ab84-b5099f59c58c", 00:02:32.166 "assigned_rate_limits": { 00:02:32.166 "rw_ios_per_sec": 0, 00:02:32.166 "rw_mbytes_per_sec": 0, 00:02:32.166 "r_mbytes_per_sec": 0, 00:02:32.166 "w_mbytes_per_sec": 0 00:02:32.166 }, 00:02:32.166 "claimed": true, 00:02:32.166 "claim_type": "exclusive_write", 00:02:32.166 "zoned": false, 00:02:32.166 "supported_io_types": { 00:02:32.166 "read": true, 00:02:32.166 "write": true, 00:02:32.166 "unmap": true, 00:02:32.166 "flush": true, 00:02:32.166 "reset": true, 00:02:32.166 "nvme_admin": false, 00:02:32.166 "nvme_io": false, 00:02:32.166 "nvme_io_md": false, 00:02:32.166 "write_zeroes": true, 00:02:32.166 "zcopy": true, 00:02:32.166 "get_zone_info": false, 00:02:32.166 "zone_management": false, 00:02:32.166 "zone_append": false, 00:02:32.166 "compare": false, 00:02:32.166 "compare_and_write": false, 00:02:32.166 "abort": true, 00:02:32.166 "seek_hole": false, 00:02:32.166 "seek_data": false, 00:02:32.166 "copy": true, 00:02:32.166 "nvme_iov_md": false 00:02:32.166 }, 00:02:32.166 "memory_domains": [ 00:02:32.166 { 00:02:32.166 "dma_device_id": "system", 00:02:32.166 "dma_device_type": 1 00:02:32.166 }, 00:02:32.166 { 00:02:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.166 "dma_device_type": 2 00:02:32.166 } 00:02:32.166 ], 00:02:32.166 "driver_specific": {} 00:02:32.166 }, 00:02:32.166 { 
00:02:32.166 "name": "Passthru0", 00:02:32.166 "aliases": [ 00:02:32.166 "b2f9dc86-4fb9-5a6b-a061-7c034a3307b6" 00:02:32.166 ], 00:02:32.166 "product_name": "passthru", 00:02:32.166 "block_size": 512, 00:02:32.166 "num_blocks": 16384, 00:02:32.166 "uuid": "b2f9dc86-4fb9-5a6b-a061-7c034a3307b6", 00:02:32.166 "assigned_rate_limits": { 00:02:32.166 "rw_ios_per_sec": 0, 00:02:32.166 "rw_mbytes_per_sec": 0, 00:02:32.166 "r_mbytes_per_sec": 0, 00:02:32.166 "w_mbytes_per_sec": 0 00:02:32.166 }, 00:02:32.166 "claimed": false, 00:02:32.166 "zoned": false, 00:02:32.166 "supported_io_types": { 00:02:32.166 "read": true, 00:02:32.166 "write": true, 00:02:32.166 "unmap": true, 00:02:32.166 "flush": true, 00:02:32.166 "reset": true, 00:02:32.166 "nvme_admin": false, 00:02:32.166 "nvme_io": false, 00:02:32.166 "nvme_io_md": false, 00:02:32.166 "write_zeroes": true, 00:02:32.166 "zcopy": true, 00:02:32.166 "get_zone_info": false, 00:02:32.166 "zone_management": false, 00:02:32.166 "zone_append": false, 00:02:32.166 "compare": false, 00:02:32.166 "compare_and_write": false, 00:02:32.166 "abort": true, 00:02:32.166 "seek_hole": false, 00:02:32.166 "seek_data": false, 00:02:32.166 "copy": true, 00:02:32.166 "nvme_iov_md": false 00:02:32.166 }, 00:02:32.166 "memory_domains": [ 00:02:32.166 { 00:02:32.166 "dma_device_id": "system", 00:02:32.166 "dma_device_type": 1 00:02:32.166 }, 00:02:32.166 { 00:02:32.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.166 "dma_device_type": 2 00:02:32.166 } 00:02:32.166 ], 00:02:32.166 "driver_specific": { 00:02:32.166 "passthru": { 00:02:32.166 "name": "Passthru0", 00:02:32.166 "base_bdev_name": "Malloc0" 00:02:32.166 } 00:02:32.166 } 00:02:32.166 } 00:02:32.166 ]' 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:32.166 19:09:05 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:02:32.166 19:09:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:32.166 00:02:32.166 real 0m0.205s 00:02:32.166 user 0m0.114s 00:02:32.166 sys 0m0.029s 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 ************************************ 00:02:32.166 END TEST rpc_integrity 00:02:32.166 ************************************ 00:02:32.166 19:09:05 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:02:32.166 19:09:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:32.166 19:09:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:32.166 19:09:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 ************************************ 00:02:32.166 START TEST rpc_plugins 
00:02:32.166 ************************************ 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:02:32.166 19:09:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:02:32.166 19:09:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:32.166 19:09:05 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.166 19:09:05 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:02:32.166 { 00:02:32.166 "name": "Malloc1", 00:02:32.166 "aliases": [ 00:02:32.166 "5567f9f5-dc1c-4aff-a343-1676ab0a23a5" 00:02:32.166 ], 00:02:32.166 "product_name": "Malloc disk", 00:02:32.166 "block_size": 4096, 00:02:32.166 "num_blocks": 256, 00:02:32.166 "uuid": "5567f9f5-dc1c-4aff-a343-1676ab0a23a5", 00:02:32.166 "assigned_rate_limits": { 00:02:32.166 "rw_ios_per_sec": 0, 00:02:32.166 "rw_mbytes_per_sec": 0, 00:02:32.166 "r_mbytes_per_sec": 0, 00:02:32.166 "w_mbytes_per_sec": 0 00:02:32.166 }, 00:02:32.166 "claimed": false, 00:02:32.166 "zoned": false, 00:02:32.166 "supported_io_types": { 00:02:32.166 "read": true, 00:02:32.166 "write": true, 00:02:32.166 "unmap": true, 00:02:32.166 "flush": true, 00:02:32.166 "reset": true, 00:02:32.166 "nvme_admin": false, 00:02:32.167 "nvme_io": false, 00:02:32.167 "nvme_io_md": false, 00:02:32.167 "write_zeroes": true, 00:02:32.167 "zcopy": true, 00:02:32.167 "get_zone_info": false, 00:02:32.167 "zone_management": false, 00:02:32.167 
"zone_append": false, 00:02:32.167 "compare": false, 00:02:32.167 "compare_and_write": false, 00:02:32.167 "abort": true, 00:02:32.167 "seek_hole": false, 00:02:32.167 "seek_data": false, 00:02:32.167 "copy": true, 00:02:32.167 "nvme_iov_md": false 00:02:32.167 }, 00:02:32.167 "memory_domains": [ 00:02:32.167 { 00:02:32.167 "dma_device_id": "system", 00:02:32.167 "dma_device_type": 1 00:02:32.167 }, 00:02:32.167 { 00:02:32.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.167 "dma_device_type": 2 00:02:32.167 } 00:02:32.167 ], 00:02:32.167 "driver_specific": {} 00:02:32.167 } 00:02:32.167 ]' 00:02:32.167 19:09:05 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:02:32.167 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:02:32.167 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.167 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:32.167 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.426 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:02:32.426 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:02:32.426 19:09:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:02:32.426 00:02:32.426 real 0m0.106s 00:02:32.426 user 0m0.056s 00:02:32.426 sys 0m0.017s 00:02:32.426 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:32.426 19:09:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:02:32.426 ************************************ 
00:02:32.426 END TEST rpc_plugins 00:02:32.426 ************************************ 00:02:32.426 19:09:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:02:32.426 19:09:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:32.426 19:09:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:32.426 19:09:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:32.426 ************************************ 00:02:32.426 START TEST rpc_trace_cmd_test 00:02:32.426 ************************************ 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.426 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:02:32.426 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3455264", 00:02:32.426 "tpoint_group_mask": "0x8", 00:02:32.426 "iscsi_conn": { 00:02:32.426 "mask": "0x2", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "scsi": { 00:02:32.426 "mask": "0x4", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "bdev": { 00:02:32.426 "mask": "0x8", 00:02:32.426 "tpoint_mask": "0xffffffffffffffff" 00:02:32.426 }, 00:02:32.426 "nvmf_rdma": { 00:02:32.426 "mask": "0x10", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "nvmf_tcp": { 00:02:32.426 "mask": "0x20", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "ftl": { 00:02:32.426 "mask": "0x40", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "blobfs": { 00:02:32.426 "mask": "0x80", 00:02:32.426 
"tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "dsa": { 00:02:32.426 "mask": "0x200", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "thread": { 00:02:32.426 "mask": "0x400", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "nvme_pcie": { 00:02:32.426 "mask": "0x800", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "iaa": { 00:02:32.426 "mask": "0x1000", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "nvme_tcp": { 00:02:32.426 "mask": "0x2000", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "bdev_nvme": { 00:02:32.426 "mask": "0x4000", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "sock": { 00:02:32.426 "mask": "0x8000", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "blob": { 00:02:32.426 "mask": "0x10000", 00:02:32.426 "tpoint_mask": "0x0" 00:02:32.426 }, 00:02:32.426 "bdev_raid": { 00:02:32.426 "mask": "0x20000", 00:02:32.427 "tpoint_mask": "0x0" 00:02:32.427 }, 00:02:32.427 "scheduler": { 00:02:32.427 "mask": "0x40000", 00:02:32.427 "tpoint_mask": "0x0" 00:02:32.427 } 00:02:32.427 }' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:02:32.427 00:02:32.427 real 0m0.152s 00:02:32.427 user 0m0.125s 00:02:32.427 sys 0m0.019s 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:32.427 19:09:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:02:32.427 ************************************ 00:02:32.427 END TEST rpc_trace_cmd_test 00:02:32.427 ************************************ 00:02:32.427 19:09:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:02:32.427 19:09:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:02:32.427 19:09:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:02:32.427 19:09:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:32.427 19:09:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:32.427 19:09:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:32.686 ************************************ 00:02:32.686 START TEST rpc_daemon_integrity 00:02:32.686 ************************************ 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:02:32.686 { 00:02:32.686 "name": "Malloc2", 00:02:32.686 "aliases": [ 00:02:32.686 "37101f67-d364-4d03-bb88-c5f81e684a5d" 00:02:32.686 ], 00:02:32.686 "product_name": "Malloc disk", 00:02:32.686 "block_size": 512, 00:02:32.686 "num_blocks": 16384, 00:02:32.686 "uuid": "37101f67-d364-4d03-bb88-c5f81e684a5d", 00:02:32.686 "assigned_rate_limits": { 00:02:32.686 "rw_ios_per_sec": 0, 00:02:32.686 "rw_mbytes_per_sec": 0, 00:02:32.686 "r_mbytes_per_sec": 0, 00:02:32.686 "w_mbytes_per_sec": 0 00:02:32.686 }, 00:02:32.686 "claimed": false, 00:02:32.686 "zoned": false, 00:02:32.686 "supported_io_types": { 00:02:32.686 "read": true, 00:02:32.686 "write": true, 00:02:32.686 "unmap": true, 00:02:32.686 "flush": true, 00:02:32.686 "reset": true, 00:02:32.686 "nvme_admin": false, 00:02:32.686 "nvme_io": false, 00:02:32.686 "nvme_io_md": false, 00:02:32.686 "write_zeroes": true, 00:02:32.686 "zcopy": true, 00:02:32.686 "get_zone_info": false, 00:02:32.686 "zone_management": false, 00:02:32.686 "zone_append": false, 00:02:32.686 "compare": false, 00:02:32.686 "compare_and_write": false, 00:02:32.686 "abort": true, 00:02:32.686 "seek_hole": false, 00:02:32.686 "seek_data": false, 00:02:32.686 "copy": true, 00:02:32.686 "nvme_iov_md": false 00:02:32.686 }, 00:02:32.686 "memory_domains": [ 00:02:32.686 { 
00:02:32.686 "dma_device_id": "system", 00:02:32.686 "dma_device_type": 1 00:02:32.686 }, 00:02:32.686 { 00:02:32.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.686 "dma_device_type": 2 00:02:32.686 } 00:02:32.686 ], 00:02:32.686 "driver_specific": {} 00:02:32.686 } 00:02:32.686 ]' 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.686 [2024-11-26 19:09:06.407114] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:02:32.686 [2024-11-26 19:09:06.407159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:02:32.686 [2024-11-26 19:09:06.407175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1672c50 00:02:32.686 [2024-11-26 19:09:06.407183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:02:32.686 [2024-11-26 19:09:06.408677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:02:32.686 [2024-11-26 19:09:06.408714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:02:32.686 Passthru0 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:02:32.686 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:02:32.686 { 00:02:32.686 "name": "Malloc2", 00:02:32.686 "aliases": [ 00:02:32.686 "37101f67-d364-4d03-bb88-c5f81e684a5d" 00:02:32.686 ], 00:02:32.686 "product_name": "Malloc disk", 00:02:32.686 "block_size": 512, 00:02:32.686 "num_blocks": 16384, 00:02:32.686 "uuid": "37101f67-d364-4d03-bb88-c5f81e684a5d", 00:02:32.686 "assigned_rate_limits": { 00:02:32.686 "rw_ios_per_sec": 0, 00:02:32.686 "rw_mbytes_per_sec": 0, 00:02:32.686 "r_mbytes_per_sec": 0, 00:02:32.686 "w_mbytes_per_sec": 0 00:02:32.686 }, 00:02:32.686 "claimed": true, 00:02:32.686 "claim_type": "exclusive_write", 00:02:32.686 "zoned": false, 00:02:32.686 "supported_io_types": { 00:02:32.686 "read": true, 00:02:32.686 "write": true, 00:02:32.686 "unmap": true, 00:02:32.686 "flush": true, 00:02:32.686 "reset": true, 00:02:32.686 "nvme_admin": false, 00:02:32.686 "nvme_io": false, 00:02:32.686 "nvme_io_md": false, 00:02:32.686 "write_zeroes": true, 00:02:32.686 "zcopy": true, 00:02:32.686 "get_zone_info": false, 00:02:32.686 "zone_management": false, 00:02:32.686 "zone_append": false, 00:02:32.686 "compare": false, 00:02:32.686 "compare_and_write": false, 00:02:32.686 "abort": true, 00:02:32.686 "seek_hole": false, 00:02:32.686 "seek_data": false, 00:02:32.686 "copy": true, 00:02:32.686 "nvme_iov_md": false 00:02:32.686 }, 00:02:32.686 "memory_domains": [ 00:02:32.686 { 00:02:32.686 "dma_device_id": "system", 00:02:32.687 "dma_device_type": 1 00:02:32.687 }, 00:02:32.687 { 00:02:32.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.687 "dma_device_type": 2 00:02:32.687 } 00:02:32.687 ], 00:02:32.687 "driver_specific": {} 00:02:32.687 }, 00:02:32.687 { 00:02:32.687 "name": "Passthru0", 00:02:32.687 "aliases": [ 00:02:32.687 "6d329561-255d-58a7-be28-70123b67321d" 00:02:32.687 ], 00:02:32.687 "product_name": "passthru", 00:02:32.687 "block_size": 512, 00:02:32.687 "num_blocks": 16384, 00:02:32.687 "uuid": 
"6d329561-255d-58a7-be28-70123b67321d", 00:02:32.687 "assigned_rate_limits": { 00:02:32.687 "rw_ios_per_sec": 0, 00:02:32.687 "rw_mbytes_per_sec": 0, 00:02:32.687 "r_mbytes_per_sec": 0, 00:02:32.687 "w_mbytes_per_sec": 0 00:02:32.687 }, 00:02:32.687 "claimed": false, 00:02:32.687 "zoned": false, 00:02:32.687 "supported_io_types": { 00:02:32.687 "read": true, 00:02:32.687 "write": true, 00:02:32.687 "unmap": true, 00:02:32.687 "flush": true, 00:02:32.687 "reset": true, 00:02:32.687 "nvme_admin": false, 00:02:32.687 "nvme_io": false, 00:02:32.687 "nvme_io_md": false, 00:02:32.687 "write_zeroes": true, 00:02:32.687 "zcopy": true, 00:02:32.687 "get_zone_info": false, 00:02:32.687 "zone_management": false, 00:02:32.687 "zone_append": false, 00:02:32.687 "compare": false, 00:02:32.687 "compare_and_write": false, 00:02:32.687 "abort": true, 00:02:32.687 "seek_hole": false, 00:02:32.687 "seek_data": false, 00:02:32.687 "copy": true, 00:02:32.687 "nvme_iov_md": false 00:02:32.687 }, 00:02:32.687 "memory_domains": [ 00:02:32.687 { 00:02:32.687 "dma_device_id": "system", 00:02:32.687 "dma_device_type": 1 00:02:32.687 }, 00:02:32.687 { 00:02:32.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:02:32.687 "dma_device_type": 2 00:02:32.687 } 00:02:32.687 ], 00:02:32.687 "driver_specific": { 00:02:32.687 "passthru": { 00:02:32.687 "name": "Passthru0", 00:02:32.687 "base_bdev_name": "Malloc2" 00:02:32.687 } 00:02:32.687 } 00:02:32.687 } 00:02:32.687 ]' 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:02:32.687 00:02:32.687 real 0m0.204s 00:02:32.687 user 0m0.116s 00:02:32.687 sys 0m0.027s 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:32.687 19:09:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:02:32.687 ************************************ 00:02:32.687 END TEST rpc_daemon_integrity 00:02:32.687 ************************************ 00:02:32.687 19:09:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:02:32.687 19:09:06 rpc -- rpc/rpc.sh@84 -- # killprocess 3455264 00:02:32.687 19:09:06 rpc -- common/autotest_common.sh@954 -- # '[' -z 3455264 ']' 00:02:32.687 19:09:06 rpc -- common/autotest_common.sh@958 -- # kill -0 3455264 00:02:32.687 19:09:06 rpc -- common/autotest_common.sh@959 -- # uname 00:02:32.687 19:09:06 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:32.687 19:09:06 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3455264 00:02:32.946 19:09:06 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:32.946 19:09:06 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:32.946 19:09:06 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3455264' 00:02:32.946 killing process with pid 3455264 00:02:32.946 19:09:06 rpc -- common/autotest_common.sh@973 -- # kill 3455264 00:02:32.946 19:09:06 rpc -- common/autotest_common.sh@978 -- # wait 3455264 00:02:33.206 00:02:33.206 real 0m2.145s 00:02:33.206 user 0m2.558s 00:02:33.206 sys 0m0.674s 00:02:33.206 19:09:06 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:33.206 19:09:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:02:33.206 ************************************ 00:02:33.206 END TEST rpc 00:02:33.206 ************************************ 00:02:33.206 19:09:06 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:02:33.206 19:09:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:33.206 19:09:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:33.206 19:09:06 -- common/autotest_common.sh@10 -- # set +x 00:02:33.206 ************************************ 00:02:33.206 START TEST skip_rpc 00:02:33.206 ************************************ 00:02:33.206 19:09:06 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:02:33.206 * Looking for test storage... 
00:02:33.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:02:33.206 19:09:06 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:33.206 19:09:06 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:02:33.206 19:09:06 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:33.206 19:09:06 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@345 -- # : 1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:33.206 19:09:07 skip_rpc -- scripts/common.sh@368 -- # return 0 00:02:33.206 19:09:07 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:33.206 19:09:07 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:33.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.206 --rc genhtml_branch_coverage=1 00:02:33.206 --rc genhtml_function_coverage=1 00:02:33.206 --rc genhtml_legend=1 00:02:33.206 --rc geninfo_all_blocks=1 00:02:33.206 --rc geninfo_unexecuted_blocks=1 00:02:33.206 00:02:33.206 ' 00:02:33.206 19:09:07 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:33.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.206 --rc genhtml_branch_coverage=1 00:02:33.206 --rc genhtml_function_coverage=1 00:02:33.207 --rc genhtml_legend=1 00:02:33.207 --rc geninfo_all_blocks=1 00:02:33.207 --rc geninfo_unexecuted_blocks=1 00:02:33.207 00:02:33.207 ' 00:02:33.207 19:09:07 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:02:33.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.207 --rc genhtml_branch_coverage=1 00:02:33.207 --rc genhtml_function_coverage=1 00:02:33.207 --rc genhtml_legend=1 00:02:33.207 --rc geninfo_all_blocks=1 00:02:33.207 --rc geninfo_unexecuted_blocks=1 00:02:33.207 00:02:33.207 ' 00:02:33.207 19:09:07 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:33.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:33.207 --rc genhtml_branch_coverage=1 00:02:33.207 --rc genhtml_function_coverage=1 00:02:33.207 --rc genhtml_legend=1 00:02:33.207 --rc geninfo_all_blocks=1 00:02:33.207 --rc geninfo_unexecuted_blocks=1 00:02:33.207 00:02:33.207 ' 00:02:33.207 19:09:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:33.207 19:09:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:33.207 19:09:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:02:33.207 19:09:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:33.207 19:09:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:33.207 19:09:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:33.207 ************************************ 00:02:33.207 START TEST skip_rpc 00:02:33.207 ************************************ 00:02:33.207 19:09:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:02:33.207 19:09:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3456040 00:02:33.207 19:09:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:33.207 19:09:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:02:33.207 19:09:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:02:33.466 [2024-11-26 19:09:07.087234] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:33.466 [2024-11-26 19:09:07.087298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456040 ] 00:02:33.466 [2024-11-26 19:09:07.173295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:33.466 [2024-11-26 19:09:07.227028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:02:38.740 19:09:12 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3456040 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3456040 ']' 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3456040 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456040 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456040' 00:02:38.740 killing process with pid 3456040 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3456040 00:02:38.740 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3456040 00:02:38.740 00:02:38.740 real 0m5.242s 00:02:38.741 user 0m4.979s 00:02:38.741 sys 0m0.289s 00:02:38.741 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:38.741 19:09:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:38.741 ************************************ 00:02:38.741 END TEST skip_rpc 00:02:38.741 ************************************ 00:02:38.741 19:09:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:02:38.741 19:09:12 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:38.741 19:09:12 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:38.741 19:09:12 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:38.741 ************************************ 00:02:38.741 START TEST skip_rpc_with_json 00:02:38.741 ************************************ 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3457503 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3457503 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3457503 ']' 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:38.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:38.741 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:02:38.741 [2024-11-26 19:09:12.373162] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:38.741 [2024-11-26 19:09:12.373211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3457503 ] 00:02:38.741 [2024-11-26 19:09:12.438791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:38.741 [2024-11-26 19:09:12.469621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:39.000 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:39.000 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:02:39.000 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:02:39.000 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:39.000 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:39.000 [2024-11-26 19:09:12.636916] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:02:39.000 request: 00:02:39.000 { 00:02:39.000 "trtype": "tcp", 00:02:39.000 "method": "nvmf_get_transports", 00:02:39.000 "req_id": 1 00:02:39.000 } 00:02:39.000 Got JSON-RPC error response 00:02:39.000 response: 00:02:39.000 { 00:02:39.000 "code": -19, 00:02:39.000 "message": "No such device" 00:02:39.001 } 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:39.001 [2024-11-26 19:09:12.649014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:39.001 19:09:12 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:39.001 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:39.001 { 00:02:39.001 "subsystems": [ 00:02:39.001 { 00:02:39.001 "subsystem": "fsdev", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "fsdev_set_opts", 00:02:39.001 "params": { 00:02:39.001 "fsdev_io_pool_size": 65535, 00:02:39.001 "fsdev_io_cache_size": 256 00:02:39.001 } 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "vfio_user_target", 00:02:39.001 "config": null 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "keyring", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "iobuf", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "iobuf_set_options", 00:02:39.001 "params": { 00:02:39.001 "small_pool_count": 8192, 00:02:39.001 "large_pool_count": 1024, 00:02:39.001 "small_bufsize": 8192, 00:02:39.001 "large_bufsize": 135168, 00:02:39.001 "enable_numa": false 00:02:39.001 } 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "sock", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "sock_set_default_impl", 00:02:39.001 "params": { 00:02:39.001 "impl_name": "posix" 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "sock_impl_set_options", 00:02:39.001 "params": { 00:02:39.001 "impl_name": "ssl", 00:02:39.001 "recv_buf_size": 4096, 00:02:39.001 "send_buf_size": 4096, 
00:02:39.001 "enable_recv_pipe": true, 00:02:39.001 "enable_quickack": false, 00:02:39.001 "enable_placement_id": 0, 00:02:39.001 "enable_zerocopy_send_server": true, 00:02:39.001 "enable_zerocopy_send_client": false, 00:02:39.001 "zerocopy_threshold": 0, 00:02:39.001 "tls_version": 0, 00:02:39.001 "enable_ktls": false 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "sock_impl_set_options", 00:02:39.001 "params": { 00:02:39.001 "impl_name": "posix", 00:02:39.001 "recv_buf_size": 2097152, 00:02:39.001 "send_buf_size": 2097152, 00:02:39.001 "enable_recv_pipe": true, 00:02:39.001 "enable_quickack": false, 00:02:39.001 "enable_placement_id": 0, 00:02:39.001 "enable_zerocopy_send_server": true, 00:02:39.001 "enable_zerocopy_send_client": false, 00:02:39.001 "zerocopy_threshold": 0, 00:02:39.001 "tls_version": 0, 00:02:39.001 "enable_ktls": false 00:02:39.001 } 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "vmd", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "accel", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "accel_set_options", 00:02:39.001 "params": { 00:02:39.001 "small_cache_size": 128, 00:02:39.001 "large_cache_size": 16, 00:02:39.001 "task_count": 2048, 00:02:39.001 "sequence_count": 2048, 00:02:39.001 "buf_count": 2048 00:02:39.001 } 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "bdev", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "bdev_set_options", 00:02:39.001 "params": { 00:02:39.001 "bdev_io_pool_size": 65535, 00:02:39.001 "bdev_io_cache_size": 256, 00:02:39.001 "bdev_auto_examine": true, 00:02:39.001 "iobuf_small_cache_size": 128, 00:02:39.001 "iobuf_large_cache_size": 16 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "bdev_raid_set_options", 00:02:39.001 "params": { 00:02:39.001 "process_window_size_kb": 1024, 00:02:39.001 "process_max_bandwidth_mb_sec": 0 
00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "bdev_iscsi_set_options", 00:02:39.001 "params": { 00:02:39.001 "timeout_sec": 30 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "bdev_nvme_set_options", 00:02:39.001 "params": { 00:02:39.001 "action_on_timeout": "none", 00:02:39.001 "timeout_us": 0, 00:02:39.001 "timeout_admin_us": 0, 00:02:39.001 "keep_alive_timeout_ms": 10000, 00:02:39.001 "arbitration_burst": 0, 00:02:39.001 "low_priority_weight": 0, 00:02:39.001 "medium_priority_weight": 0, 00:02:39.001 "high_priority_weight": 0, 00:02:39.001 "nvme_adminq_poll_period_us": 10000, 00:02:39.001 "nvme_ioq_poll_period_us": 0, 00:02:39.001 "io_queue_requests": 0, 00:02:39.001 "delay_cmd_submit": true, 00:02:39.001 "transport_retry_count": 4, 00:02:39.001 "bdev_retry_count": 3, 00:02:39.001 "transport_ack_timeout": 0, 00:02:39.001 "ctrlr_loss_timeout_sec": 0, 00:02:39.001 "reconnect_delay_sec": 0, 00:02:39.001 "fast_io_fail_timeout_sec": 0, 00:02:39.001 "disable_auto_failback": false, 00:02:39.001 "generate_uuids": false, 00:02:39.001 "transport_tos": 0, 00:02:39.001 "nvme_error_stat": false, 00:02:39.001 "rdma_srq_size": 0, 00:02:39.001 "io_path_stat": false, 00:02:39.001 "allow_accel_sequence": false, 00:02:39.001 "rdma_max_cq_size": 0, 00:02:39.001 "rdma_cm_event_timeout_ms": 0, 00:02:39.001 "dhchap_digests": [ 00:02:39.001 "sha256", 00:02:39.001 "sha384", 00:02:39.001 "sha512" 00:02:39.001 ], 00:02:39.001 "dhchap_dhgroups": [ 00:02:39.001 "null", 00:02:39.001 "ffdhe2048", 00:02:39.001 "ffdhe3072", 00:02:39.001 "ffdhe4096", 00:02:39.001 "ffdhe6144", 00:02:39.001 "ffdhe8192" 00:02:39.001 ] 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "bdev_nvme_set_hotplug", 00:02:39.001 "params": { 00:02:39.001 "period_us": 100000, 00:02:39.001 "enable": false 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "bdev_wait_for_examine" 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 
00:02:39.001 "subsystem": "scsi", 00:02:39.001 "config": null 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "scheduler", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "framework_set_scheduler", 00:02:39.001 "params": { 00:02:39.001 "name": "static" 00:02:39.001 } 00:02:39.001 } 00:02:39.001 ] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "vhost_scsi", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "vhost_blk", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "ublk", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "nbd", 00:02:39.001 "config": [] 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "subsystem": "nvmf", 00:02:39.001 "config": [ 00:02:39.001 { 00:02:39.001 "method": "nvmf_set_config", 00:02:39.001 "params": { 00:02:39.001 "discovery_filter": "match_any", 00:02:39.001 "admin_cmd_passthru": { 00:02:39.001 "identify_ctrlr": false 00:02:39.001 }, 00:02:39.001 "dhchap_digests": [ 00:02:39.001 "sha256", 00:02:39.001 "sha384", 00:02:39.001 "sha512" 00:02:39.001 ], 00:02:39.001 "dhchap_dhgroups": [ 00:02:39.001 "null", 00:02:39.001 "ffdhe2048", 00:02:39.001 "ffdhe3072", 00:02:39.001 "ffdhe4096", 00:02:39.001 "ffdhe6144", 00:02:39.001 "ffdhe8192" 00:02:39.001 ] 00:02:39.001 } 00:02:39.001 }, 00:02:39.001 { 00:02:39.001 "method": "nvmf_set_max_subsystems", 00:02:39.001 "params": { 00:02:39.001 "max_subsystems": 1024 00:02:39.001 } 00:02:39.002 }, 00:02:39.002 { 00:02:39.002 "method": "nvmf_set_crdt", 00:02:39.002 "params": { 00:02:39.002 "crdt1": 0, 00:02:39.002 "crdt2": 0, 00:02:39.002 "crdt3": 0 00:02:39.002 } 00:02:39.002 }, 00:02:39.002 { 00:02:39.002 "method": "nvmf_create_transport", 00:02:39.002 "params": { 00:02:39.002 "trtype": "TCP", 00:02:39.002 "max_queue_depth": 128, 00:02:39.002 "max_io_qpairs_per_ctrlr": 127, 00:02:39.002 "in_capsule_data_size": 4096, 00:02:39.002 "max_io_size": 131072, 00:02:39.002 
"io_unit_size": 131072, 00:02:39.002 "max_aq_depth": 128, 00:02:39.002 "num_shared_buffers": 511, 00:02:39.002 "buf_cache_size": 4294967295, 00:02:39.002 "dif_insert_or_strip": false, 00:02:39.002 "zcopy": false, 00:02:39.002 "c2h_success": true, 00:02:39.002 "sock_priority": 0, 00:02:39.002 "abort_timeout_sec": 1, 00:02:39.002 "ack_timeout": 0, 00:02:39.002 "data_wr_pool_size": 0 00:02:39.002 } 00:02:39.002 } 00:02:39.002 ] 00:02:39.002 }, 00:02:39.002 { 00:02:39.002 "subsystem": "iscsi", 00:02:39.002 "config": [ 00:02:39.002 { 00:02:39.002 "method": "iscsi_set_options", 00:02:39.002 "params": { 00:02:39.002 "node_base": "iqn.2016-06.io.spdk", 00:02:39.002 "max_sessions": 128, 00:02:39.002 "max_connections_per_session": 2, 00:02:39.002 "max_queue_depth": 64, 00:02:39.002 "default_time2wait": 2, 00:02:39.002 "default_time2retain": 20, 00:02:39.002 "first_burst_length": 8192, 00:02:39.002 "immediate_data": true, 00:02:39.002 "allow_duplicated_isid": false, 00:02:39.002 "error_recovery_level": 0, 00:02:39.002 "nop_timeout": 60, 00:02:39.002 "nop_in_interval": 30, 00:02:39.002 "disable_chap": false, 00:02:39.002 "require_chap": false, 00:02:39.002 "mutual_chap": false, 00:02:39.002 "chap_group": 0, 00:02:39.002 "max_large_datain_per_connection": 64, 00:02:39.002 "max_r2t_per_connection": 4, 00:02:39.002 "pdu_pool_size": 36864, 00:02:39.002 "immediate_data_pool_size": 16384, 00:02:39.002 "data_out_pool_size": 2048 00:02:39.002 } 00:02:39.002 } 00:02:39.002 ] 00:02:39.002 } 00:02:39.002 ] 00:02:39.002 } 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3457503 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3457503 ']' 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3457503 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457503 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457503' 00:02:39.002 killing process with pid 3457503 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3457503 00:02:39.002 19:09:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3457503 00:02:39.262 19:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3457649 00:02:39.262 19:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:02:39.262 19:09:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3457649 ']' 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3457649' 00:02:44.537 killing process with pid 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3457649 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:02:44.537 00:02:44.537 real 0m5.953s 00:02:44.537 user 0m5.732s 00:02:44.537 sys 0m0.466s 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:02:44.537 ************************************ 00:02:44.537 END TEST skip_rpc_with_json 00:02:44.537 ************************************ 00:02:44.537 19:09:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:02:44.537 19:09:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:44.537 19:09:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:44.537 19:09:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:44.537 ************************************ 00:02:44.537 START TEST skip_rpc_with_delay 00:02:44.537 ************************************ 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:44.537 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:02:44.538 [2024-11-26 19:09:18.371432] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:02:44.538 00:02:44.538 real 0m0.056s 00:02:44.538 user 0m0.034s 00:02:44.538 sys 0m0.022s 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:44.538 19:09:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:02:44.538 ************************************ 00:02:44.538 END TEST skip_rpc_with_delay 00:02:44.538 ************************************ 00:02:44.797 19:09:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:02:44.797 19:09:18 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:02:44.797 19:09:18 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:02:44.797 19:09:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:44.797 19:09:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:44.797 19:09:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:44.797 ************************************ 00:02:44.797 START TEST exit_on_failed_rpc_init 00:02:44.797 ************************************ 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3458991 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3458991 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3458991 ']' 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:44.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:02:44.797 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:02:44.797 [2024-11-26 19:09:18.479121] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:44.797 [2024-11-26 19:09:18.479182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458991 ] 00:02:44.797 [2024-11-26 19:09:18.550269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:44.797 [2024-11-26 19:09:18.587965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:45.060 19:09:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:02:45.060 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:02:45.060 [2024-11-26 19:09:18.797851] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:45.060 [2024-11-26 19:09:18.797902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459034 ] 00:02:45.060 [2024-11-26 19:09:18.875080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:45.060 [2024-11-26 19:09:18.910915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:02:45.060 [2024-11-26 19:09:18.910963] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:02:45.060 [2024-11-26 19:09:18.910973] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:02:45.060 [2024-11-26 19:09:18.910979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3458991 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3458991 ']' 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3458991 00:02:45.391 19:09:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3458991 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3458991' 00:02:45.391 killing process with pid 3458991 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3458991 00:02:45.391 19:09:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3458991 00:02:45.391 00:02:45.391 real 0m0.742s 00:02:45.391 user 0m0.831s 00:02:45.391 sys 0m0.304s 00:02:45.391 19:09:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:45.391 19:09:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:02:45.391 ************************************ 00:02:45.391 END TEST exit_on_failed_rpc_init 00:02:45.391 ************************************ 00:02:45.391 19:09:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:02:45.391 00:02:45.391 real 0m12.320s 00:02:45.391 user 0m11.702s 00:02:45.391 sys 0m1.298s 00:02:45.391 19:09:19 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:45.391 19:09:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:45.391 ************************************ 00:02:45.391 END TEST skip_rpc 00:02:45.391 ************************************ 00:02:45.391 19:09:19 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:02:45.391 19:09:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:45.391 19:09:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:45.391 19:09:19 -- common/autotest_common.sh@10 -- # set +x 00:02:45.724 ************************************ 00:02:45.724 START TEST rpc_client 00:02:45.724 ************************************ 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:02:45.724 * Looking for test storage... 00:02:45.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@345 -- # : 1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@353 -- # local d=1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@355 -- # echo 1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@353 -- # local d=2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@355 -- # echo 2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:45.724 19:09:19 rpc_client -- scripts/common.sh@368 -- # return 0 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:45.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.724 --rc genhtml_branch_coverage=1 00:02:45.724 --rc genhtml_function_coverage=1 00:02:45.724 --rc genhtml_legend=1 00:02:45.724 --rc geninfo_all_blocks=1 00:02:45.724 --rc geninfo_unexecuted_blocks=1 00:02:45.724 00:02:45.724 ' 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:45.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.724 --rc genhtml_branch_coverage=1 
00:02:45.724 --rc genhtml_function_coverage=1 00:02:45.724 --rc genhtml_legend=1 00:02:45.724 --rc geninfo_all_blocks=1 00:02:45.724 --rc geninfo_unexecuted_blocks=1 00:02:45.724 00:02:45.724 ' 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:45.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.724 --rc genhtml_branch_coverage=1 00:02:45.724 --rc genhtml_function_coverage=1 00:02:45.724 --rc genhtml_legend=1 00:02:45.724 --rc geninfo_all_blocks=1 00:02:45.724 --rc geninfo_unexecuted_blocks=1 00:02:45.724 00:02:45.724 ' 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:45.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.724 --rc genhtml_branch_coverage=1 00:02:45.724 --rc genhtml_function_coverage=1 00:02:45.724 --rc genhtml_legend=1 00:02:45.724 --rc geninfo_all_blocks=1 00:02:45.724 --rc geninfo_unexecuted_blocks=1 00:02:45.724 00:02:45.724 ' 00:02:45.724 19:09:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:02:45.724 OK 00:02:45.724 19:09:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:02:45.724 00:02:45.724 real 0m0.133s 00:02:45.724 user 0m0.081s 00:02:45.724 sys 0m0.058s 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:45.724 19:09:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:02:45.724 ************************************ 00:02:45.724 END TEST rpc_client 00:02:45.724 ************************************ 00:02:45.724 19:09:19 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:02:45.724 19:09:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:45.725 19:09:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:45.725 19:09:19 -- common/autotest_common.sh@10 
-- # set +x 00:02:45.725 ************************************ 00:02:45.725 START TEST json_config 00:02:45.725 ************************************ 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:45.725 19:09:19 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:45.725 19:09:19 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:02:45.725 19:09:19 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:02:45.725 19:09:19 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:02:45.725 19:09:19 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:45.725 19:09:19 json_config -- scripts/common.sh@344 -- # case "$op" in 00:02:45.725 19:09:19 json_config -- scripts/common.sh@345 -- # : 1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:45.725 19:09:19 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:45.725 19:09:19 json_config -- scripts/common.sh@365 -- # decimal 1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@353 -- # local d=1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:45.725 19:09:19 json_config -- scripts/common.sh@355 -- # echo 1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:02:45.725 19:09:19 json_config -- scripts/common.sh@366 -- # decimal 2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@353 -- # local d=2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:45.725 19:09:19 json_config -- scripts/common.sh@355 -- # echo 2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:02:45.725 19:09:19 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:45.725 19:09:19 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:45.725 19:09:19 json_config -- scripts/common.sh@368 -- # return 0 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:45.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.725 --rc genhtml_branch_coverage=1 00:02:45.725 --rc genhtml_function_coverage=1 00:02:45.725 --rc genhtml_legend=1 00:02:45.725 --rc geninfo_all_blocks=1 00:02:45.725 --rc geninfo_unexecuted_blocks=1 00:02:45.725 00:02:45.725 ' 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:45.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.725 --rc genhtml_branch_coverage=1 00:02:45.725 --rc genhtml_function_coverage=1 00:02:45.725 --rc genhtml_legend=1 00:02:45.725 --rc geninfo_all_blocks=1 00:02:45.725 --rc geninfo_unexecuted_blocks=1 00:02:45.725 00:02:45.725 ' 00:02:45.725 19:09:19 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:45.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.725 --rc genhtml_branch_coverage=1 00:02:45.725 --rc genhtml_function_coverage=1 00:02:45.725 --rc genhtml_legend=1 00:02:45.725 --rc geninfo_all_blocks=1 00:02:45.725 --rc geninfo_unexecuted_blocks=1 00:02:45.725 00:02:45.725 ' 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:45.725 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:45.725 --rc genhtml_branch_coverage=1 00:02:45.725 --rc genhtml_function_coverage=1 00:02:45.725 --rc genhtml_legend=1 00:02:45.725 --rc geninfo_all_blocks=1 00:02:45.725 --rc geninfo_unexecuted_blocks=1 00:02:45.725 00:02:45.725 ' 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@7 -- # uname -s 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:45.725 19:09:19 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:02:45.725 19:09:19 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:45.725 19:09:19 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.725 19:09:19 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.725 19:09:19 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.725 19:09:19 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.725 19:09:19 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.725 19:09:19 json_config -- paths/export.sh@5 -- # export PATH 00:02:45.725 19:09:19 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@51 -- # : 0 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:45.725 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:45.725 19:09:19 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:02:45.725 INFO: JSON configuration test init 00:02:45.725 19:09:19 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:02:45.725 19:09:19 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:45.725 19:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:45.726 19:09:19 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:45.726 19:09:19 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:02:45.726 19:09:19 json_config -- json_config/common.sh@9 -- # local app=target 00:02:45.726 19:09:19 json_config -- json_config/common.sh@10 -- # shift 00:02:45.726 19:09:19 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:02:45.726 19:09:19 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:02:45.726 19:09:19 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:02:45.726 19:09:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:45.726 19:09:19 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:45.726 19:09:19 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3459199 00:02:45.726 19:09:19 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:02:45.726 Waiting for target to run... 
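The `lt`/`cmp_versions` traces earlier in this transcript (scripts/common.sh@333-368, run once in the rpc_client test and again here in json_config) split two dotted version strings and compare them field by field to decide whether the installed lcov predates 2.0. A minimal standalone sketch of that idea, assuming bash; `lt` here is an illustrative reimplementation, not the SPDK original:

```shell
# Compare dotted version strings numerically, field by field.
# lt VER1 VER2 -> exit 0 iff VER1 < VER2 (illustrative helper).
lt() {
  local IFS=.
  local -a a=($1) b=($2)   # split on '.' via IFS word splitting
  local i
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    ((${a[i]:-0} < ${b[i]:-0})) && return 0  # strictly smaller field
    ((${a[i]:-0} > ${b[i]:-0})) && return 1  # strictly larger field
  done
  return 1  # all fields equal
}
```

`lt 1.15 2` succeeds under this logic, which matches the trace above taking the pre-2.0 branch and setting the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options.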
00:02:45.726 19:09:19 json_config -- json_config/common.sh@25 -- # waitforlisten 3459199 /var/tmp/spdk_tgt.sock 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@835 -- # '[' -z 3459199 ']' 00:02:45.726 19:09:19 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:45.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:45.726 19:09:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:45.985 [2024-11-26 19:09:19.611111] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
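The `waitforlisten` call traced above blocks until the freshly launched `spdk_tgt` is up and listening on `/var/tmp/spdk_tgt.sock`, polling with a retry budget rather than sleeping a fixed time. A hedged sketch of that polling pattern; `wait_for` is an illustrative generic helper, not the SPDK function:

```shell
# Poll a condition command until it succeeds or the retry budget runs out.
# wait_for CMD [RETRIES] -> exit 0 once CMD succeeds, 1 if it never does.
wait_for() {
  local cmd=$1 retries=${2:-100} i
  for ((i = 0; i < retries; i++)); do
    eval "$cmd" && return 0   # condition met
    sleep 0.1
  done
  return 1  # condition never became true within the budget
}

# e.g. waiting for an RPC socket: wait_for '[ -S /var/tmp/spdk_tgt.sock ]' 100
```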
00:02:45.985 [2024-11-26 19:09:19.611188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3459199 ] 00:02:46.243 [2024-11-26 19:09:19.873854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:46.243 [2024-11-26 19:09:19.897537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@868 -- # return 0 00:02:46.810 19:09:20 json_config -- json_config/common.sh@26 -- # echo '' 00:02:46.810 00:02:46.810 19:09:20 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:02:46.810 19:09:20 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:46.810 19:09:20 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:02:46.810 19:09:20 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:46.810 19:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:46.810 19:09:20 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:02:46.811 19:09:20 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:02:46.811 19:09:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:02:47.378 19:09:20 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:47.378 19:09:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:02:47.378 19:09:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:02:47.378 19:09:20 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@51 -- # local get_types 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@54 -- # sort 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:02:47.378 19:09:21 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:02:47.378 19:09:21 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:47.378 19:09:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@62 -- # return 0 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:02:47.378 19:09:21 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:47.378 19:09:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:02:47.378 19:09:21 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:02:47.378 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:02:47.638 MallocForNvmf0 00:02:47.638 19:09:21 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:02:47.638 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:02:47.638 MallocForNvmf1 00:02:47.638 19:09:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:02:47.638 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:02:47.898 [2024-11-26 19:09:21.588452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:47.898 19:09:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:02:47.898 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:02:47.898 19:09:21 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:02:47.898 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:02:48.157 19:09:21 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:02:48.157 19:09:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:02:48.416 19:09:22 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:02:48.416 19:09:22 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:02:48.416 [2024-11-26 19:09:22.206402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:02:48.416 19:09:22 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:02:48.416 19:09:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:48.416 19:09:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:48.416 19:09:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:02:48.416 19:09:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:48.416 19:09:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:48.416 19:09:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:02:48.416 19:09:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:48.416 19:09:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:02:48.675 MallocBdevForConfigChangeCheck 00:02:48.675 19:09:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:02:48.675 19:09:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:48.675 19:09:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:48.675 19:09:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:02:48.675 19:09:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:48.935 19:09:22 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:02:48.935 INFO: shutting down applications... 00:02:48.935 19:09:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:02:48.935 19:09:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:02:48.935 19:09:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:02:48.935 19:09:22 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:02:49.502 Calling clear_iscsi_subsystem 00:02:49.502 Calling clear_nvmf_subsystem 00:02:49.502 Calling clear_nbd_subsystem 00:02:49.502 Calling clear_ublk_subsystem 00:02:49.502 Calling clear_vhost_blk_subsystem 00:02:49.502 Calling clear_vhost_scsi_subsystem 00:02:49.502 Calling clear_bdev_subsystem 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@350 -- # count=100 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:02:49.502 19:09:23 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:02:49.762 19:09:23 json_config -- json_config/json_config.sh@352 -- # break 00:02:49.762 19:09:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:02:49.762 19:09:23 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:02:49.762 19:09:23 json_config -- json_config/common.sh@31 -- # local app=target 00:02:49.762 19:09:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:02:49.762 19:09:23 json_config -- json_config/common.sh@35 -- # [[ -n 3459199 ]] 00:02:49.762 19:09:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3459199 00:02:49.762 19:09:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:02:49.762 19:09:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:02:49.762 19:09:23 json_config -- json_config/common.sh@41 -- # kill -0 3459199 00:02:49.762 19:09:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:02:50.331 19:09:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:02:50.331 19:09:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:02:50.331 19:09:23 json_config -- json_config/common.sh@41 -- # kill -0 3459199 00:02:50.331 19:09:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:02:50.331 19:09:23 json_config -- json_config/common.sh@43 -- # break 00:02:50.331 19:09:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:02:50.331 19:09:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:02:50.331 SPDK target shutdown done 00:02:50.331 19:09:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:02:50.331 INFO: relaunching applications... 
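The shutdown just traced (json_config/common.sh@38-45) sends `SIGINT`, then probes the pid with `kill -0` every 0.5 s for up to 30 tries before declaring the target down. A hedged sketch of that loop under stated assumptions; the helper name and the optional signal parameter are illustrative additions:

```shell
# Gracefully stop a process: signal it, then poll until it exits.
# shutdown_app PID [SIG] -> 0 once the process is gone, 1 if it survives.
shutdown_app() {
  local pid=$1 sig=${2:-SIGINT} i
  kill -"$sig" "$pid" 2>/dev/null || true
  for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || return 0  # signal 0: existence check only
    sleep 0.5
  done
  kill -9 "$pid" 2>/dev/null || true  # budget exhausted; force it
  return 1
}
```

The `kill -0` probe delivers no signal; it only asks the kernel whether the pid still exists, which is why the loop can poll cheaply at a 0.5 s cadence.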
00:02:50.331 19:09:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:50.332 19:09:23 json_config -- json_config/common.sh@9 -- # local app=target 00:02:50.332 19:09:23 json_config -- json_config/common.sh@10 -- # shift 00:02:50.332 19:09:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:02:50.332 19:09:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:02:50.332 19:09:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:02:50.332 19:09:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:50.332 19:09:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:50.332 19:09:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3460307 00:02:50.332 19:09:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:02:50.332 Waiting for target to run... 00:02:50.332 19:09:23 json_config -- json_config/common.sh@25 -- # waitforlisten 3460307 /var/tmp/spdk_tgt.sock 00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@835 -- # '[' -z 3460307 ']' 00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:50.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:50.332 19:09:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:50.332 19:09:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:50.332 [2024-11-26 19:09:23.982083] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:50.332 [2024-11-26 19:09:23.982157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3460307 ] 00:02:50.591 [2024-11-26 19:09:24.216826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:50.591 [2024-11-26 19:09:24.239912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:51.160 [2024-11-26 19:09:24.742719] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:02:51.160 [2024-11-26 19:09:24.775069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:02:51.160 19:09:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:51.160 19:09:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:02:51.160 19:09:24 json_config -- json_config/common.sh@26 -- # echo '' 00:02:51.160 00:02:51.160 19:09:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:02:51.160 19:09:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:02:51.160 INFO: Checking if target configuration is the same... 
00:02:51.160 19:09:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:51.160 19:09:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:02:51.160 19:09:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:51.160 + '[' 2 -ne 2 ']' 00:02:51.160 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:02:51.160 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:02:51.160 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.160 +++ basename /dev/fd/62 00:02:51.160 ++ mktemp /tmp/62.XXX 00:02:51.160 + tmp_file_1=/tmp/62.ao3 00:02:51.160 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:51.160 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:51.160 + tmp_file_2=/tmp/spdk_tgt_config.json.tbC 00:02:51.160 + ret=0 00:02:51.160 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:51.420 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:51.420 + diff -u /tmp/62.ao3 /tmp/spdk_tgt_config.json.tbC 00:02:51.420 + echo 'INFO: JSON config files are the same' 00:02:51.420 INFO: JSON config files are the same 00:02:51.420 + rm /tmp/62.ao3 /tmp/spdk_tgt_config.json.tbC 00:02:51.420 + exit 0 00:02:51.420 19:09:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:02:51.420 19:09:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:02:51.420 INFO: changing configuration and checking if this can be detected... 
00:02:51.420 19:09:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:51.420 19:09:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:02:51.679 19:09:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:51.679 19:09:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:02:51.679 19:09:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:02:51.679 + '[' 2 -ne 2 ']' 00:02:51.679 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:02:51.679 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:02:51.679 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:51.679 +++ basename /dev/fd/62 00:02:51.679 ++ mktemp /tmp/62.XXX 00:02:51.679 + tmp_file_1=/tmp/62.VW3 00:02:51.679 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:51.679 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:02:51.679 + tmp_file_2=/tmp/spdk_tgt_config.json.Vyo 00:02:51.679 + ret=0 00:02:51.679 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:51.938 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:02:51.938 + diff -u /tmp/62.VW3 /tmp/spdk_tgt_config.json.Vyo 00:02:51.938 + ret=1 00:02:51.938 + echo '=== Start of file: /tmp/62.VW3 ===' 00:02:51.938 + cat /tmp/62.VW3 00:02:51.938 + echo '=== End of file: /tmp/62.VW3 ===' 00:02:51.938 + echo '' 00:02:51.938 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Vyo ===' 00:02:51.938 + cat /tmp/spdk_tgt_config.json.Vyo 00:02:51.938 + echo '=== End of file: /tmp/spdk_tgt_config.json.Vyo ===' 00:02:51.938 + echo '' 00:02:51.938 + rm /tmp/62.VW3 /tmp/spdk_tgt_config.json.Vyo 00:02:51.938 + exit 1 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:02:51.938 INFO: configuration change detected. 
00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 3460307 ]] 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:51.938 19:09:25 json_config -- json_config/json_config.sh@330 -- # killprocess 3460307 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@954 -- # '[' -z 3460307 ']' 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@958 -- # kill -0 
3460307 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@959 -- # uname 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3460307 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3460307' 00:02:51.938 killing process with pid 3460307 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@973 -- # kill 3460307 00:02:51.938 19:09:25 json_config -- common/autotest_common.sh@978 -- # wait 3460307 00:02:52.197 19:09:25 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:02:52.197 19:09:25 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:02:52.197 19:09:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:52.197 19:09:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:52.197 19:09:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:02:52.197 19:09:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:02:52.197 INFO: Success 00:02:52.197 00:02:52.197 real 0m6.574s 00:02:52.197 user 0m7.836s 00:02:52.197 sys 0m1.472s 00:02:52.197 19:09:26 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:52.197 19:09:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:02:52.197 ************************************ 00:02:52.197 END TEST json_config 00:02:52.197 ************************************ 00:02:52.197 19:09:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:02:52.197 19:09:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:52.197 19:09:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:52.197 19:09:26 -- common/autotest_common.sh@10 -- # set +x 00:02:52.197 ************************************ 00:02:52.197 START TEST json_config_extra_key 00:02:52.197 ************************************ 00:02:52.197 19:09:26 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.457 --rc genhtml_branch_coverage=1 00:02:52.457 --rc genhtml_function_coverage=1 00:02:52.457 --rc genhtml_legend=1 00:02:52.457 --rc geninfo_all_blocks=1 
00:02:52.457 --rc geninfo_unexecuted_blocks=1 00:02:52.457 00:02:52.457 ' 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.457 --rc genhtml_branch_coverage=1 00:02:52.457 --rc genhtml_function_coverage=1 00:02:52.457 --rc genhtml_legend=1 00:02:52.457 --rc geninfo_all_blocks=1 00:02:52.457 --rc geninfo_unexecuted_blocks=1 00:02:52.457 00:02:52.457 ' 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.457 --rc genhtml_branch_coverage=1 00:02:52.457 --rc genhtml_function_coverage=1 00:02:52.457 --rc genhtml_legend=1 00:02:52.457 --rc geninfo_all_blocks=1 00:02:52.457 --rc geninfo_unexecuted_blocks=1 00:02:52.457 00:02:52.457 ' 00:02:52.457 19:09:26 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:52.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:52.457 --rc genhtml_branch_coverage=1 00:02:52.457 --rc genhtml_function_coverage=1 00:02:52.457 --rc genhtml_legend=1 00:02:52.457 --rc geninfo_all_blocks=1 00:02:52.457 --rc geninfo_unexecuted_blocks=1 00:02:52.457 00:02:52.457 ' 00:02:52.457 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.457 19:09:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.457 19:09:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.457 19:09:26 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.457 19:09:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.457 19:09:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.457 19:09:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:02:52.458 19:09:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:02:52.458 19:09:26 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:52.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:52.458 19:09:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:02:52.458 INFO: launching applications... 00:02:52.458 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3461087 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:02:52.458 Waiting for target to run... 
00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3461087 /var/tmp/spdk_tgt.sock 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3461087 ']' 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:02:52.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:52.458 19:09:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:02:52.458 19:09:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:02:52.458 [2024-11-26 19:09:26.213714] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:52.458 [2024-11-26 19:09:26.213769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461087 ] 00:02:52.717 [2024-11-26 19:09:26.449650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:52.717 [2024-11-26 19:09:26.472677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:53.288 19:09:26 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:53.288 19:09:26 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:02:53.288 00:02:53.288 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:02:53.288 INFO: shutting down applications... 00:02:53.288 19:09:26 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3461087 ]] 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3461087 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3461087 00:02:53.288 19:09:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:02:53.857 19:09:27 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3461087 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:02:53.857 19:09:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:02:53.857 SPDK target shutdown done 00:02:53.857 19:09:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:02:53.857 Success 00:02:53.857 00:02:53.857 real 0m1.433s 00:02:53.857 user 0m1.109s 00:02:53.857 sys 0m0.296s 00:02:53.857 19:09:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:53.857 19:09:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:02:53.857 ************************************ 00:02:53.857 END TEST json_config_extra_key 00:02:53.857 ************************************ 00:02:53.857 19:09:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:02:53.857 19:09:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:53.857 19:09:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:53.857 19:09:27 -- common/autotest_common.sh@10 -- # set +x 00:02:53.857 ************************************ 00:02:53.857 START TEST alias_rpc 00:02:53.857 ************************************ 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:02:53.857 * Looking for test storage... 
00:02:53.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@345 -- # : 1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:53.857 19:09:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.857 --rc genhtml_branch_coverage=1 00:02:53.857 --rc genhtml_function_coverage=1 00:02:53.857 --rc genhtml_legend=1 00:02:53.857 --rc geninfo_all_blocks=1 00:02:53.857 --rc geninfo_unexecuted_blocks=1 00:02:53.857 00:02:53.857 ' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.857 --rc genhtml_branch_coverage=1 00:02:53.857 --rc genhtml_function_coverage=1 00:02:53.857 --rc genhtml_legend=1 00:02:53.857 --rc geninfo_all_blocks=1 00:02:53.857 --rc geninfo_unexecuted_blocks=1 00:02:53.857 00:02:53.857 ' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:02:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.857 --rc genhtml_branch_coverage=1 00:02:53.857 --rc genhtml_function_coverage=1 00:02:53.857 --rc genhtml_legend=1 00:02:53.857 --rc geninfo_all_blocks=1 00:02:53.857 --rc geninfo_unexecuted_blocks=1 00:02:53.857 00:02:53.857 ' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:53.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:53.857 --rc genhtml_branch_coverage=1 00:02:53.857 --rc genhtml_function_coverage=1 00:02:53.857 --rc genhtml_legend=1 00:02:53.857 --rc geninfo_all_blocks=1 00:02:53.857 --rc geninfo_unexecuted_blocks=1 00:02:53.857 00:02:53.857 ' 00:02:53.857 19:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:02:53.857 19:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3461480 00:02:53.857 19:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3461480 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3461480 ']' 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:53.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:53.857 19:09:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:53.857 19:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:53.857 [2024-11-26 19:09:27.696260] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:53.857 [2024-11-26 19:09:27.696311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461480 ] 00:02:54.117 [2024-11-26 19:09:27.756161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:54.117 [2024-11-26 19:09:27.787973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:54.117 19:09:27 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:54.117 19:09:27 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:02:54.117 19:09:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:02:54.375 19:09:28 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3461480 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3461480 ']' 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3461480 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461480 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461480' 00:02:54.375 killing process with pid 3461480 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@973 -- # kill 3461480 00:02:54.375 19:09:28 alias_rpc -- common/autotest_common.sh@978 -- # wait 3461480 00:02:54.634 00:02:54.634 real 0m0.834s 00:02:54.634 user 0m0.875s 00:02:54.634 sys 0m0.312s 00:02:54.634 19:09:28 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:02:54.634 19:09:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:02:54.634 ************************************ 00:02:54.634 END TEST alias_rpc 00:02:54.634 ************************************ 00:02:54.634 19:09:28 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:02:54.634 19:09:28 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:02:54.634 19:09:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:54.634 19:09:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:54.634 19:09:28 -- common/autotest_common.sh@10 -- # set +x 00:02:54.634 ************************************ 00:02:54.634 START TEST spdkcli_tcp 00:02:54.634 ************************************ 00:02:54.634 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:02:54.634 * Looking for test storage... 
00:02:54.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:02:54.634 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:54.634 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:02:54.634 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:54.893 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:54.893 19:09:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:54.893 19:09:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:54.893 19:09:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:54.894 19:09:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:54.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.894 --rc genhtml_branch_coverage=1 00:02:54.894 --rc genhtml_function_coverage=1 00:02:54.894 --rc genhtml_legend=1 00:02:54.894 --rc geninfo_all_blocks=1 00:02:54.894 --rc geninfo_unexecuted_blocks=1 00:02:54.894 00:02:54.894 ' 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:54.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.894 --rc genhtml_branch_coverage=1 00:02:54.894 --rc genhtml_function_coverage=1 00:02:54.894 --rc genhtml_legend=1 00:02:54.894 --rc geninfo_all_blocks=1 00:02:54.894 --rc geninfo_unexecuted_blocks=1 00:02:54.894 00:02:54.894 ' 00:02:54.894 19:09:28 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:54.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.894 --rc genhtml_branch_coverage=1 00:02:54.894 --rc genhtml_function_coverage=1 00:02:54.894 --rc genhtml_legend=1 00:02:54.894 --rc geninfo_all_blocks=1 00:02:54.894 --rc geninfo_unexecuted_blocks=1 00:02:54.894 00:02:54.894 ' 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:54.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:54.894 --rc genhtml_branch_coverage=1 00:02:54.894 --rc genhtml_function_coverage=1 00:02:54.894 --rc genhtml_legend=1 00:02:54.894 --rc geninfo_all_blocks=1 00:02:54.894 --rc geninfo_unexecuted_blocks=1 00:02:54.894 00:02:54.894 ' 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3461557 00:02:54.894 19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3461557 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3461557 ']' 00:02:54.894 
19:09:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:54.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:54.894 19:09:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:02:54.894 [2024-11-26 19:09:28.580544] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:54.894 [2024-11-26 19:09:28.580604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461557 ] 00:02:54.894 [2024-11-26 19:09:28.649365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:02:54.894 [2024-11-26 19:09:28.683576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:02:54.894 [2024-11-26 19:09:28.683580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:55.837 19:09:29 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:55.837 19:09:29 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:02:55.837 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3461888 00:02:55.837 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:02:55.837 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:02:55.837 [ 00:02:55.837 "bdev_malloc_delete", 00:02:55.837 "bdev_malloc_create", 00:02:55.837 "bdev_null_resize", 00:02:55.837 "bdev_null_delete", 00:02:55.837 "bdev_null_create", 00:02:55.837 "bdev_nvme_cuse_unregister", 00:02:55.837 "bdev_nvme_cuse_register", 00:02:55.837 "bdev_opal_new_user", 00:02:55.837 "bdev_opal_set_lock_state", 00:02:55.837 "bdev_opal_delete", 00:02:55.837 "bdev_opal_get_info", 00:02:55.837 "bdev_opal_create", 00:02:55.837 "bdev_nvme_opal_revert", 00:02:55.837 "bdev_nvme_opal_init", 00:02:55.837 "bdev_nvme_send_cmd", 00:02:55.837 "bdev_nvme_set_keys", 00:02:55.837 "bdev_nvme_get_path_iostat", 00:02:55.837 "bdev_nvme_get_mdns_discovery_info", 00:02:55.837 "bdev_nvme_stop_mdns_discovery", 00:02:55.837 "bdev_nvme_start_mdns_discovery", 00:02:55.837 "bdev_nvme_set_multipath_policy", 00:02:55.837 "bdev_nvme_set_preferred_path", 00:02:55.837 "bdev_nvme_get_io_paths", 00:02:55.837 "bdev_nvme_remove_error_injection", 00:02:55.837 "bdev_nvme_add_error_injection", 00:02:55.837 "bdev_nvme_get_discovery_info", 00:02:55.837 "bdev_nvme_stop_discovery", 00:02:55.837 "bdev_nvme_start_discovery", 00:02:55.837 "bdev_nvme_get_controller_health_info", 00:02:55.837 "bdev_nvme_disable_controller", 00:02:55.837 "bdev_nvme_enable_controller", 00:02:55.837 "bdev_nvme_reset_controller", 00:02:55.837 "bdev_nvme_get_transport_statistics", 00:02:55.837 "bdev_nvme_apply_firmware", 00:02:55.837 "bdev_nvme_detach_controller", 00:02:55.837 "bdev_nvme_get_controllers", 00:02:55.837 "bdev_nvme_attach_controller", 00:02:55.837 "bdev_nvme_set_hotplug", 00:02:55.837 "bdev_nvme_set_options", 00:02:55.837 "bdev_passthru_delete", 00:02:55.837 "bdev_passthru_create", 00:02:55.837 "bdev_lvol_set_parent_bdev", 00:02:55.837 "bdev_lvol_set_parent", 00:02:55.837 "bdev_lvol_check_shallow_copy", 00:02:55.837 "bdev_lvol_start_shallow_copy", 00:02:55.837 "bdev_lvol_grow_lvstore", 00:02:55.837 "bdev_lvol_get_lvols", 00:02:55.837 "bdev_lvol_get_lvstores", 
00:02:55.837 "bdev_lvol_delete", 00:02:55.837 "bdev_lvol_set_read_only", 00:02:55.837 "bdev_lvol_resize", 00:02:55.837 "bdev_lvol_decouple_parent", 00:02:55.837 "bdev_lvol_inflate", 00:02:55.837 "bdev_lvol_rename", 00:02:55.837 "bdev_lvol_clone_bdev", 00:02:55.837 "bdev_lvol_clone", 00:02:55.837 "bdev_lvol_snapshot", 00:02:55.837 "bdev_lvol_create", 00:02:55.837 "bdev_lvol_delete_lvstore", 00:02:55.837 "bdev_lvol_rename_lvstore", 00:02:55.837 "bdev_lvol_create_lvstore", 00:02:55.837 "bdev_raid_set_options", 00:02:55.837 "bdev_raid_remove_base_bdev", 00:02:55.837 "bdev_raid_add_base_bdev", 00:02:55.837 "bdev_raid_delete", 00:02:55.837 "bdev_raid_create", 00:02:55.837 "bdev_raid_get_bdevs", 00:02:55.837 "bdev_error_inject_error", 00:02:55.837 "bdev_error_delete", 00:02:55.837 "bdev_error_create", 00:02:55.837 "bdev_split_delete", 00:02:55.837 "bdev_split_create", 00:02:55.837 "bdev_delay_delete", 00:02:55.837 "bdev_delay_create", 00:02:55.837 "bdev_delay_update_latency", 00:02:55.837 "bdev_zone_block_delete", 00:02:55.837 "bdev_zone_block_create", 00:02:55.837 "blobfs_create", 00:02:55.837 "blobfs_detect", 00:02:55.837 "blobfs_set_cache_size", 00:02:55.837 "bdev_aio_delete", 00:02:55.837 "bdev_aio_rescan", 00:02:55.837 "bdev_aio_create", 00:02:55.837 "bdev_ftl_set_property", 00:02:55.837 "bdev_ftl_get_properties", 00:02:55.837 "bdev_ftl_get_stats", 00:02:55.837 "bdev_ftl_unmap", 00:02:55.837 "bdev_ftl_unload", 00:02:55.837 "bdev_ftl_delete", 00:02:55.837 "bdev_ftl_load", 00:02:55.837 "bdev_ftl_create", 00:02:55.837 "bdev_virtio_attach_controller", 00:02:55.837 "bdev_virtio_scsi_get_devices", 00:02:55.837 "bdev_virtio_detach_controller", 00:02:55.837 "bdev_virtio_blk_set_hotplug", 00:02:55.837 "bdev_iscsi_delete", 00:02:55.837 "bdev_iscsi_create", 00:02:55.837 "bdev_iscsi_set_options", 00:02:55.837 "accel_error_inject_error", 00:02:55.837 "ioat_scan_accel_module", 00:02:55.837 "dsa_scan_accel_module", 00:02:55.837 "iaa_scan_accel_module", 00:02:55.837 
"vfu_virtio_create_fs_endpoint", 00:02:55.837 "vfu_virtio_create_scsi_endpoint", 00:02:55.837 "vfu_virtio_scsi_remove_target", 00:02:55.837 "vfu_virtio_scsi_add_target", 00:02:55.837 "vfu_virtio_create_blk_endpoint", 00:02:55.837 "vfu_virtio_delete_endpoint", 00:02:55.837 "keyring_file_remove_key", 00:02:55.837 "keyring_file_add_key", 00:02:55.837 "keyring_linux_set_options", 00:02:55.837 "fsdev_aio_delete", 00:02:55.837 "fsdev_aio_create", 00:02:55.837 "iscsi_get_histogram", 00:02:55.837 "iscsi_enable_histogram", 00:02:55.837 "iscsi_set_options", 00:02:55.837 "iscsi_get_auth_groups", 00:02:55.837 "iscsi_auth_group_remove_secret", 00:02:55.837 "iscsi_auth_group_add_secret", 00:02:55.837 "iscsi_delete_auth_group", 00:02:55.837 "iscsi_create_auth_group", 00:02:55.837 "iscsi_set_discovery_auth", 00:02:55.837 "iscsi_get_options", 00:02:55.838 "iscsi_target_node_request_logout", 00:02:55.838 "iscsi_target_node_set_redirect", 00:02:55.838 "iscsi_target_node_set_auth", 00:02:55.838 "iscsi_target_node_add_lun", 00:02:55.838 "iscsi_get_stats", 00:02:55.838 "iscsi_get_connections", 00:02:55.838 "iscsi_portal_group_set_auth", 00:02:55.838 "iscsi_start_portal_group", 00:02:55.838 "iscsi_delete_portal_group", 00:02:55.838 "iscsi_create_portal_group", 00:02:55.838 "iscsi_get_portal_groups", 00:02:55.838 "iscsi_delete_target_node", 00:02:55.838 "iscsi_target_node_remove_pg_ig_maps", 00:02:55.838 "iscsi_target_node_add_pg_ig_maps", 00:02:55.838 "iscsi_create_target_node", 00:02:55.838 "iscsi_get_target_nodes", 00:02:55.838 "iscsi_delete_initiator_group", 00:02:55.838 "iscsi_initiator_group_remove_initiators", 00:02:55.838 "iscsi_initiator_group_add_initiators", 00:02:55.838 "iscsi_create_initiator_group", 00:02:55.838 "iscsi_get_initiator_groups", 00:02:55.838 "nvmf_set_crdt", 00:02:55.838 "nvmf_set_config", 00:02:55.838 "nvmf_set_max_subsystems", 00:02:55.838 "nvmf_stop_mdns_prr", 00:02:55.838 "nvmf_publish_mdns_prr", 00:02:55.838 "nvmf_subsystem_get_listeners", 00:02:55.838 
"nvmf_subsystem_get_qpairs", 00:02:55.838 "nvmf_subsystem_get_controllers", 00:02:55.838 "nvmf_get_stats", 00:02:55.838 "nvmf_get_transports", 00:02:55.838 "nvmf_create_transport", 00:02:55.838 "nvmf_get_targets", 00:02:55.838 "nvmf_delete_target", 00:02:55.838 "nvmf_create_target", 00:02:55.838 "nvmf_subsystem_allow_any_host", 00:02:55.838 "nvmf_subsystem_set_keys", 00:02:55.838 "nvmf_subsystem_remove_host", 00:02:55.838 "nvmf_subsystem_add_host", 00:02:55.838 "nvmf_ns_remove_host", 00:02:55.838 "nvmf_ns_add_host", 00:02:55.838 "nvmf_subsystem_remove_ns", 00:02:55.838 "nvmf_subsystem_set_ns_ana_group", 00:02:55.838 "nvmf_subsystem_add_ns", 00:02:55.838 "nvmf_subsystem_listener_set_ana_state", 00:02:55.838 "nvmf_discovery_get_referrals", 00:02:55.838 "nvmf_discovery_remove_referral", 00:02:55.838 "nvmf_discovery_add_referral", 00:02:55.838 "nvmf_subsystem_remove_listener", 00:02:55.838 "nvmf_subsystem_add_listener", 00:02:55.838 "nvmf_delete_subsystem", 00:02:55.838 "nvmf_create_subsystem", 00:02:55.838 "nvmf_get_subsystems", 00:02:55.838 "env_dpdk_get_mem_stats", 00:02:55.838 "nbd_get_disks", 00:02:55.838 "nbd_stop_disk", 00:02:55.838 "nbd_start_disk", 00:02:55.838 "ublk_recover_disk", 00:02:55.838 "ublk_get_disks", 00:02:55.838 "ublk_stop_disk", 00:02:55.838 "ublk_start_disk", 00:02:55.838 "ublk_destroy_target", 00:02:55.838 "ublk_create_target", 00:02:55.838 "virtio_blk_create_transport", 00:02:55.838 "virtio_blk_get_transports", 00:02:55.838 "vhost_controller_set_coalescing", 00:02:55.838 "vhost_get_controllers", 00:02:55.838 "vhost_delete_controller", 00:02:55.838 "vhost_create_blk_controller", 00:02:55.838 "vhost_scsi_controller_remove_target", 00:02:55.838 "vhost_scsi_controller_add_target", 00:02:55.838 "vhost_start_scsi_controller", 00:02:55.838 "vhost_create_scsi_controller", 00:02:55.838 "thread_set_cpumask", 00:02:55.838 "scheduler_set_options", 00:02:55.838 "framework_get_governor", 00:02:55.838 "framework_get_scheduler", 00:02:55.838 
"framework_set_scheduler", 00:02:55.838 "framework_get_reactors", 00:02:55.838 "thread_get_io_channels", 00:02:55.838 "thread_get_pollers", 00:02:55.838 "thread_get_stats", 00:02:55.838 "framework_monitor_context_switch", 00:02:55.838 "spdk_kill_instance", 00:02:55.838 "log_enable_timestamps", 00:02:55.838 "log_get_flags", 00:02:55.838 "log_clear_flag", 00:02:55.838 "log_set_flag", 00:02:55.838 "log_get_level", 00:02:55.838 "log_set_level", 00:02:55.838 "log_get_print_level", 00:02:55.838 "log_set_print_level", 00:02:55.838 "framework_enable_cpumask_locks", 00:02:55.838 "framework_disable_cpumask_locks", 00:02:55.838 "framework_wait_init", 00:02:55.838 "framework_start_init", 00:02:55.838 "scsi_get_devices", 00:02:55.838 "bdev_get_histogram", 00:02:55.838 "bdev_enable_histogram", 00:02:55.838 "bdev_set_qos_limit", 00:02:55.838 "bdev_set_qd_sampling_period", 00:02:55.838 "bdev_get_bdevs", 00:02:55.838 "bdev_reset_iostat", 00:02:55.838 "bdev_get_iostat", 00:02:55.838 "bdev_examine", 00:02:55.838 "bdev_wait_for_examine", 00:02:55.838 "bdev_set_options", 00:02:55.838 "accel_get_stats", 00:02:55.838 "accel_set_options", 00:02:55.838 "accel_set_driver", 00:02:55.838 "accel_crypto_key_destroy", 00:02:55.838 "accel_crypto_keys_get", 00:02:55.838 "accel_crypto_key_create", 00:02:55.838 "accel_assign_opc", 00:02:55.838 "accel_get_module_info", 00:02:55.838 "accel_get_opc_assignments", 00:02:55.838 "vmd_rescan", 00:02:55.838 "vmd_remove_device", 00:02:55.838 "vmd_enable", 00:02:55.838 "sock_get_default_impl", 00:02:55.838 "sock_set_default_impl", 00:02:55.838 "sock_impl_set_options", 00:02:55.838 "sock_impl_get_options", 00:02:55.838 "iobuf_get_stats", 00:02:55.838 "iobuf_set_options", 00:02:55.838 "keyring_get_keys", 00:02:55.838 "vfu_tgt_set_base_path", 00:02:55.838 "framework_get_pci_devices", 00:02:55.838 "framework_get_config", 00:02:55.838 "framework_get_subsystems", 00:02:55.838 "fsdev_set_opts", 00:02:55.838 "fsdev_get_opts", 00:02:55.838 "trace_get_info", 
00:02:55.838 "trace_get_tpoint_group_mask", 00:02:55.838 "trace_disable_tpoint_group", 00:02:55.838 "trace_enable_tpoint_group", 00:02:55.838 "trace_clear_tpoint_mask", 00:02:55.838 "trace_set_tpoint_mask", 00:02:55.838 "notify_get_notifications", 00:02:55.838 "notify_get_types", 00:02:55.838 "spdk_get_version", 00:02:55.838 "rpc_get_methods" 00:02:55.838 ] 00:02:55.838 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:02:55.838 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:02:55.838 19:09:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3461557 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3461557 ']' 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3461557 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461557 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461557' 00:02:55.838 killing process with pid 3461557 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3461557 00:02:55.838 19:09:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3461557 00:02:56.098 00:02:56.098 real 0m1.354s 00:02:56.098 user 0m2.570s 00:02:56.098 sys 0m0.350s 00:02:56.098 19:09:29 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:56.098 19:09:29 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:02:56.098 ************************************ 00:02:56.098 END TEST spdkcli_tcp 00:02:56.098 ************************************ 00:02:56.098 19:09:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:02:56.098 19:09:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:56.098 19:09:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:56.098 19:09:29 -- common/autotest_common.sh@10 -- # set +x 00:02:56.098 ************************************ 00:02:56.098 START TEST dpdk_mem_utility 00:02:56.098 ************************************ 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:02:56.098 * Looking for test storage... 00:02:56.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:56.098 19:09:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:02:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.098 --rc genhtml_branch_coverage=1 00:02:56.098 --rc genhtml_function_coverage=1 00:02:56.098 --rc genhtml_legend=1 00:02:56.098 --rc geninfo_all_blocks=1 00:02:56.098 --rc geninfo_unexecuted_blocks=1 00:02:56.098 00:02:56.098 ' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.098 --rc genhtml_branch_coverage=1 00:02:56.098 --rc genhtml_function_coverage=1 00:02:56.098 --rc genhtml_legend=1 00:02:56.098 --rc geninfo_all_blocks=1 00:02:56.098 --rc geninfo_unexecuted_blocks=1 00:02:56.098 00:02:56.098 ' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.098 --rc genhtml_branch_coverage=1 00:02:56.098 --rc genhtml_function_coverage=1 00:02:56.098 --rc genhtml_legend=1 00:02:56.098 --rc geninfo_all_blocks=1 00:02:56.098 --rc geninfo_unexecuted_blocks=1 00:02:56.098 00:02:56.098 ' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:56.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:56.098 --rc genhtml_branch_coverage=1 00:02:56.098 --rc genhtml_function_coverage=1 00:02:56.098 --rc genhtml_legend=1 00:02:56.098 --rc geninfo_all_blocks=1 00:02:56.098 --rc geninfo_unexecuted_blocks=1 00:02:56.098 00:02:56.098 ' 00:02:56.098 19:09:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:02:56.098 19:09:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3461964 00:02:56.098 19:09:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3461964 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 
3461964 ']' 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:02:56.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:02:56.098 19:09:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:02:56.098 19:09:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:02:56.357 [2024-11-26 19:09:29.981306] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:56.357 [2024-11-26 19:09:29.981370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3461964 ] 00:02:56.357 [2024-11-26 19:09:30.049452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:56.357 [2024-11-26 19:09:30.079684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:56.616 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:02:56.616 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:02:56.616 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:02:56.616 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:02:56.616 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
00:02:56.616 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:02:56.616 { 00:02:56.616 "filename": "/tmp/spdk_mem_dump.txt" 00:02:56.616 } 00:02:56.616 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:02:56.616 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:02:56.616 DPDK memory size 818.000000 MiB in 1 heap(s) 00:02:56.616 1 heaps totaling size 818.000000 MiB 00:02:56.616 size: 818.000000 MiB heap id: 0 00:02:56.616 end heaps---------- 00:02:56.616 9 mempools totaling size 603.782043 MiB 00:02:56.616 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:02:56.616 size: 158.602051 MiB name: PDU_data_out_Pool 00:02:56.616 size: 100.555481 MiB name: bdev_io_3461964 00:02:56.616 size: 50.003479 MiB name: msgpool_3461964 00:02:56.616 size: 36.509338 MiB name: fsdev_io_3461964 00:02:56.616 size: 21.763794 MiB name: PDU_Pool 00:02:56.616 size: 19.513306 MiB name: SCSI_TASK_Pool 00:02:56.616 size: 4.133484 MiB name: evtpool_3461964 00:02:56.617 size: 0.026123 MiB name: Session_Pool 00:02:56.617 end mempools------- 00:02:56.617 6 memzones totaling size 4.142822 MiB 00:02:56.617 size: 1.000366 MiB name: RG_ring_0_3461964 00:02:56.617 size: 1.000366 MiB name: RG_ring_1_3461964 00:02:56.617 size: 1.000366 MiB name: RG_ring_4_3461964 00:02:56.617 size: 1.000366 MiB name: RG_ring_5_3461964 00:02:56.617 size: 0.125366 MiB name: RG_ring_2_3461964 00:02:56.617 size: 0.015991 MiB name: RG_ring_3_3461964 00:02:56.617 end memzones------- 00:02:56.617 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:02:56.617 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:02:56.617 list of free elements. 
size: 10.852478 MiB 00:02:56.617 element at address: 0x200019200000 with size: 0.999878 MiB 00:02:56.617 element at address: 0x200019400000 with size: 0.999878 MiB 00:02:56.617 element at address: 0x200000400000 with size: 0.998535 MiB 00:02:56.617 element at address: 0x200032000000 with size: 0.994446 MiB 00:02:56.617 element at address: 0x200006400000 with size: 0.959839 MiB 00:02:56.617 element at address: 0x200012c00000 with size: 0.944275 MiB 00:02:56.617 element at address: 0x200019600000 with size: 0.936584 MiB 00:02:56.617 element at address: 0x200000200000 with size: 0.717346 MiB 00:02:56.617 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:02:56.617 element at address: 0x200000c00000 with size: 0.495422 MiB 00:02:56.617 element at address: 0x20000a600000 with size: 0.490723 MiB 00:02:56.617 element at address: 0x200019800000 with size: 0.485657 MiB 00:02:56.617 element at address: 0x200003e00000 with size: 0.481934 MiB 00:02:56.617 element at address: 0x200028200000 with size: 0.410034 MiB 00:02:56.617 element at address: 0x200000800000 with size: 0.355042 MiB 00:02:56.617 list of standard malloc elements. 
size: 199.218628 MiB 00:02:56.617 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:02:56.617 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:02:56.617 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:02:56.617 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:02:56.617 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:02:56.617 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:02:56.617 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:02:56.617 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:02:56.617 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:02:56.617 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000085b040 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000085f300 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000087f680 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200000cff000 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:02:56.617 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200003efb980 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:02:56.617 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200028268f80 with size: 0.000183 MiB 00:02:56.617 element at address: 0x200028269040 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:02:56.617 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:02:56.617 list of memzone associated elements. 
size: 607.928894 MiB 00:02:56.617 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:02:56.617 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:02:56.617 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:02:56.617 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:02:56.617 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:02:56.617 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_3461964_0 00:02:56.617 element at address: 0x200000dff380 with size: 48.003052 MiB 00:02:56.617 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3461964_0 00:02:56.617 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:02:56.617 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3461964_0 00:02:56.617 element at address: 0x2000199be940 with size: 20.255554 MiB 00:02:56.617 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:02:56.617 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:02:56.617 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:02:56.617 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:02:56.617 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3461964_0 00:02:56.617 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:02:56.617 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3461964 00:02:56.617 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:02:56.617 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3461964 00:02:56.617 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:02:56.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:02:56.617 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:02:56.617 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:02:56.617 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:02:56.617 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:02:56.617 element at address: 0x200003efba40 with size: 1.008118 MiB 00:02:56.617 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:02:56.617 element at address: 0x200000cff180 with size: 1.000488 MiB 00:02:56.617 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3461964 00:02:56.617 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:02:56.617 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3461964 00:02:56.617 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:02:56.617 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3461964 00:02:56.617 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:02:56.617 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3461964 00:02:56.617 element at address: 0x20000087f740 with size: 0.500488 MiB 00:02:56.617 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3461964 00:02:56.617 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:02:56.617 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3461964 00:02:56.617 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:02:56.617 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:02:56.617 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:02:56.617 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:02:56.617 element at address: 0x20001987c540 with size: 0.250488 MiB 00:02:56.617 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:02:56.617 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:02:56.617 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3461964 00:02:56.617 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:02:56.617 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3461964 00:02:56.617 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:02:56.617 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:02:56.617 element at address: 0x200028269100 with size: 0.023743 MiB 00:02:56.617 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:02:56.617 element at address: 0x20000085b100 with size: 0.016113 MiB 00:02:56.617 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3461964 00:02:56.617 element at address: 0x20002826f240 with size: 0.002441 MiB 00:02:56.617 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:02:56.617 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:02:56.617 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3461964 00:02:56.617 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:02:56.617 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3461964 00:02:56.617 element at address: 0x20000085af00 with size: 0.000305 MiB 00:02:56.617 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3461964 00:02:56.617 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:02:56.617 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:02:56.617 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:02:56.617 19:09:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3461964 00:02:56.617 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3461964 ']' 00:02:56.617 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3461964 00:02:56.617 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:02:56.617 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:02:56.618 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3461964 00:02:56.618 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:02:56.618 19:09:30 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:02:56.618 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3461964' 00:02:56.618 killing process with pid 3461964 00:02:56.618 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3461964 00:02:56.618 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3461964 00:02:56.877 00:02:56.877 real 0m0.759s 00:02:56.877 user 0m0.735s 00:02:56.877 sys 0m0.293s 00:02:56.877 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:56.877 19:09:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:02:56.877 ************************************ 00:02:56.877 END TEST dpdk_mem_utility 00:02:56.877 ************************************ 00:02:56.877 19:09:30 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:02:56.877 19:09:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:02:56.877 19:09:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:56.877 19:09:30 -- common/autotest_common.sh@10 -- # set +x 00:02:56.877 ************************************ 00:02:56.877 START TEST event 00:02:56.877 ************************************ 00:02:56.877 19:09:30 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:02:56.877 * Looking for test storage... 
00:02:56.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:02:56.877 19:09:30 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:56.877 19:09:30 event -- common/autotest_common.sh@1693 -- # lcov --version 00:02:56.877 19:09:30 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:56.877 19:09:30 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:56.877 19:09:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:56.877 19:09:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:56.877 19:09:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:56.877 19:09:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:02:56.877 19:09:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:02:56.877 19:09:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:02:56.877 19:09:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:02:56.877 19:09:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:02:56.877 19:09:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:02:56.877 19:09:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:02:56.877 19:09:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:56.877 19:09:30 event -- scripts/common.sh@344 -- # case "$op" in 00:02:56.877 19:09:30 event -- scripts/common.sh@345 -- # : 1 00:02:56.877 19:09:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:56.877 19:09:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:56.877 19:09:30 event -- scripts/common.sh@365 -- # decimal 1 00:02:56.877 19:09:30 event -- scripts/common.sh@353 -- # local d=1 00:02:56.877 19:09:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:56.877 19:09:30 event -- scripts/common.sh@355 -- # echo 1 00:02:56.877 19:09:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:02:57.138 19:09:30 event -- scripts/common.sh@366 -- # decimal 2 00:02:57.138 19:09:30 event -- scripts/common.sh@353 -- # local d=2 00:02:57.138 19:09:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:57.138 19:09:30 event -- scripts/common.sh@355 -- # echo 2 00:02:57.138 19:09:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:02:57.138 19:09:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:57.138 19:09:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:57.138 19:09:30 event -- scripts/common.sh@368 -- # return 0 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.138 --rc genhtml_branch_coverage=1 00:02:57.138 --rc genhtml_function_coverage=1 00:02:57.138 --rc genhtml_legend=1 00:02:57.138 --rc geninfo_all_blocks=1 00:02:57.138 --rc geninfo_unexecuted_blocks=1 00:02:57.138 00:02:57.138 ' 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.138 --rc genhtml_branch_coverage=1 00:02:57.138 --rc genhtml_function_coverage=1 00:02:57.138 --rc genhtml_legend=1 00:02:57.138 --rc geninfo_all_blocks=1 00:02:57.138 --rc geninfo_unexecuted_blocks=1 00:02:57.138 00:02:57.138 ' 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:57.138 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:02:57.138 --rc genhtml_branch_coverage=1 00:02:57.138 --rc genhtml_function_coverage=1 00:02:57.138 --rc genhtml_legend=1 00:02:57.138 --rc geninfo_all_blocks=1 00:02:57.138 --rc geninfo_unexecuted_blocks=1 00:02:57.138 00:02:57.138 ' 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:57.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:57.138 --rc genhtml_branch_coverage=1 00:02:57.138 --rc genhtml_function_coverage=1 00:02:57.138 --rc genhtml_legend=1 00:02:57.138 --rc geninfo_all_blocks=1 00:02:57.138 --rc geninfo_unexecuted_blocks=1 00:02:57.138 00:02:57.138 ' 00:02:57.138 19:09:30 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:02:57.138 19:09:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:02:57.138 19:09:30 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:02:57.138 19:09:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:57.138 19:09:30 event -- common/autotest_common.sh@10 -- # set +x 00:02:57.138 ************************************ 00:02:57.138 START TEST event_perf 00:02:57.138 ************************************ 00:02:57.138 19:09:30 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:02:57.138 Running I/O for 1 seconds...[2024-11-26 19:09:30.780043] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:57.138 [2024-11-26 19:09:30.780090] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462355 ] 00:02:57.139 [2024-11-26 19:09:30.844970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:02:57.139 [2024-11-26 19:09:30.878668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:02:57.139 [2024-11-26 19:09:30.878817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:02:57.139 [2024-11-26 19:09:30.878965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:57.139 Running I/O for 1 seconds...[2024-11-26 19:09:30.878966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:02:58.100 00:02:58.100 lcore 0: 186029 00:02:58.100 lcore 1: 186030 00:02:58.100 lcore 2: 186027 00:02:58.100 lcore 3: 186028 00:02:58.100 done. 
00:02:58.100 00:02:58.100 real 0m1.136s 00:02:58.100 user 0m4.073s 00:02:58.100 sys 0m0.061s 00:02:58.100 19:09:31 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:58.100 19:09:31 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:02:58.100 ************************************ 00:02:58.100 END TEST event_perf 00:02:58.100 ************************************ 00:02:58.100 19:09:31 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:02:58.100 19:09:31 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:02:58.100 19:09:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:58.100 19:09:31 event -- common/autotest_common.sh@10 -- # set +x 00:02:58.100 ************************************ 00:02:58.100 START TEST event_reactor 00:02:58.100 ************************************ 00:02:58.100 19:09:31 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:02:58.100 [2024-11-26 19:09:31.962517] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:02:58.100 [2024-11-26 19:09:31.962564] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462640 ] 00:02:58.358 [2024-11-26 19:09:32.028412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:58.358 [2024-11-26 19:09:32.057649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:02:59.293 test_start 00:02:59.293 oneshot 00:02:59.293 tick 100 00:02:59.293 tick 100 00:02:59.293 tick 250 00:02:59.293 tick 100 00:02:59.293 tick 100 00:02:59.293 tick 100 00:02:59.293 tick 250 00:02:59.293 tick 500 00:02:59.293 tick 100 00:02:59.293 tick 100 00:02:59.293 tick 250 00:02:59.293 tick 100 00:02:59.293 tick 100 00:02:59.293 test_end 00:02:59.293 00:02:59.293 real 0m1.131s 00:02:59.293 user 0m1.074s 00:02:59.293 sys 0m0.054s 00:02:59.293 19:09:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:02:59.293 19:09:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:02:59.293 ************************************ 00:02:59.293 END TEST event_reactor 00:02:59.293 ************************************ 00:02:59.293 19:09:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:02:59.293 19:09:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:02:59.293 19:09:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:02:59.293 19:09:33 event -- common/autotest_common.sh@10 -- # set +x 00:02:59.293 ************************************ 00:02:59.293 START TEST event_reactor_perf 00:02:59.293 ************************************ 00:02:59.293 19:09:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:02:59.293 [2024-11-26 19:09:33.137914] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:02:59.293 [2024-11-26 19:09:33.137960] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3462765 ] 00:02:59.551 [2024-11-26 19:09:33.202527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:02:59.551 [2024-11-26 19:09:33.232751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:00.485 test_start 00:03:00.485 test_end 00:03:00.485 Performance: 541774 events per second 00:03:00.485 00:03:00.485 real 0m1.130s 00:03:00.485 user 0m1.066s 00:03:00.485 sys 0m0.061s 00:03:00.485 19:09:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:00.485 19:09:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:00.485 ************************************ 00:03:00.485 END TEST event_reactor_perf 00:03:00.485 ************************************ 00:03:00.485 19:09:34 event -- event/event.sh@49 -- # uname -s 00:03:00.485 19:09:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:00.485 19:09:34 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:00.485 19:09:34 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:00.485 19:09:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:00.485 19:09:34 event -- common/autotest_common.sh@10 -- # set +x 00:03:00.485 ************************************ 00:03:00.485 START TEST event_scheduler 00:03:00.485 ************************************ 00:03:00.485 19:09:34 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:00.745 * Looking for test storage... 00:03:00.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:00.745 19:09:34 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.745 --rc genhtml_branch_coverage=1 00:03:00.745 --rc genhtml_function_coverage=1 00:03:00.745 --rc genhtml_legend=1 00:03:00.745 --rc geninfo_all_blocks=1 00:03:00.745 --rc geninfo_unexecuted_blocks=1 00:03:00.745 00:03:00.745 ' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.745 --rc genhtml_branch_coverage=1 00:03:00.745 --rc genhtml_function_coverage=1 00:03:00.745 --rc 
genhtml_legend=1 00:03:00.745 --rc geninfo_all_blocks=1 00:03:00.745 --rc geninfo_unexecuted_blocks=1 00:03:00.745 00:03:00.745 ' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.745 --rc genhtml_branch_coverage=1 00:03:00.745 --rc genhtml_function_coverage=1 00:03:00.745 --rc genhtml_legend=1 00:03:00.745 --rc geninfo_all_blocks=1 00:03:00.745 --rc geninfo_unexecuted_blocks=1 00:03:00.745 00:03:00.745 ' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:00.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:00.745 --rc genhtml_branch_coverage=1 00:03:00.745 --rc genhtml_function_coverage=1 00:03:00.745 --rc genhtml_legend=1 00:03:00.745 --rc geninfo_all_blocks=1 00:03:00.745 --rc geninfo_unexecuted_blocks=1 00:03:00.745 00:03:00.745 ' 00:03:00.745 19:09:34 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:00.745 19:09:34 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3463132 00:03:00.745 19:09:34 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:00.745 19:09:34 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3463132 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3463132 ']' 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:00.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:00.745 19:09:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:00.745 19:09:34 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:00.745 [2024-11-26 19:09:34.466442] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:00.745 [2024-11-26 19:09:34.466508] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463132 ] 00:03:00.745 [2024-11-26 19:09:34.552852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:00.745 [2024-11-26 19:09:34.607980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:00.745 [2024-11-26 19:09:34.608162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:00.745 [2024-11-26 19:09:34.608265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:00.745 [2024-11-26 19:09:34.608267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:01.682 19:09:35 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:01.682 19:09:35 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:03:01.682 19:09:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:01.682 19:09:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.682 19:09:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:01.682 [2024-11-26 19:09:35.262719] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:01.683 [2024-11-26 
19:09:35.262734] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:01.683 [2024-11-26 19:09:35.262742] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:01.683 [2024-11-26 19:09:35.262747] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:01.683 [2024-11-26 19:09:35.262751] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 [2024-11-26 19:09:35.319529] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 ************************************ 00:03:01.683 START TEST scheduler_create_thread 00:03:01.683 ************************************ 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:01.683 19:09:35 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 2 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 3 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 4 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 5 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 6 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 7 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 8 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 
-- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 9 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 10 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:01.683 19:09:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:03.063 19:09:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:03.063 00:03:03.063 real 0m1.171s 00:03:03.063 user 0m0.011s 00:03:03.063 sys 0m0.006s 00:03:03.063 19:09:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:03.063 19:09:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:03.063 ************************************ 00:03:03.063 END TEST scheduler_create_thread 00:03:03.063 ************************************ 00:03:03.063 19:09:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:03:03.063 19:09:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3463132 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3463132 ']' 00:03:03.063 19:09:36 event.event_scheduler -- 
common/autotest_common.sh@958 -- # kill -0 3463132 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463132 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3463132' 00:03:03.063 killing process with pid 3463132 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3463132 00:03:03.063 19:09:36 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3463132 00:03:03.322 [2024-11-26 19:09:36.996711] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:03:03.322 00:03:03.322 real 0m2.783s 00:03:03.322 user 0m4.941s 00:03:03.322 sys 0m0.316s 00:03:03.322 19:09:37 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:03.322 19:09:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:03.322 ************************************ 00:03:03.322 END TEST event_scheduler 00:03:03.322 ************************************ 00:03:03.322 19:09:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:03:03.322 19:09:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:03:03.322 19:09:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:03.322 19:09:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:03.322 19:09:37 event -- common/autotest_common.sh@10 -- # set +x 00:03:03.322 ************************************ 00:03:03.322 START TEST app_repeat 00:03:03.322 ************************************ 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3463849 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3463849' 00:03:03.322 
Process app_repeat pid: 3463849 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:03:03.322 spdk_app_start Round 0 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3463849 /var/tmp/spdk-nbd.sock 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3463849 ']' 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:03.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:03.322 19:09:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:03.322 19:09:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:03:03.322 [2024-11-26 19:09:37.157690] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:03.322 [2024-11-26 19:09:37.157736] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3463849 ] 00:03:03.581 [2024-11-26 19:09:37.224410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:03.581 [2024-11-26 19:09:37.256639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:03.581 [2024-11-26 19:09:37.256641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:03.581 19:09:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:03.581 19:09:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:03.581 19:09:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:03.840 Malloc0 00:03:03.840 19:09:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:03.840 Malloc1 00:03:03.840 19:09:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:03.840 
19:09:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:03.840 19:09:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:04.099 /dev/nbd0 00:03:04.099 19:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:04.099 19:09:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:03:04.099 1+0 records in 00:03:04.099 1+0 records out 00:03:04.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178634 s, 22.9 MB/s 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:04.099 19:09:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:04.100 19:09:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:04.100 19:09:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:04.100 19:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:04.100 19:09:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:04.100 19:09:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:04.359 /dev/nbd1 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:04.359 19:09:38 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:04.359 1+0 records in 00:03:04.359 1+0 records out 00:03:04.359 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276113 s, 14.8 MB/s 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:04.359 19:09:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:04.359 19:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:04.618 { 00:03:04.618 "nbd_device": "/dev/nbd0", 00:03:04.618 "bdev_name": "Malloc0" 00:03:04.618 }, 00:03:04.618 { 00:03:04.618 "nbd_device": "/dev/nbd1", 00:03:04.618 "bdev_name": "Malloc1" 00:03:04.618 } 00:03:04.618 ]' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:04.618 { 00:03:04.618 "nbd_device": "/dev/nbd0", 00:03:04.618 "bdev_name": "Malloc0" 00:03:04.618 
}, 00:03:04.618 { 00:03:04.618 "nbd_device": "/dev/nbd1", 00:03:04.618 "bdev_name": "Malloc1" 00:03:04.618 } 00:03:04.618 ]' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:04.618 /dev/nbd1' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:04.618 /dev/nbd1' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:04.618 256+0 records in 00:03:04.618 256+0 records out 00:03:04.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043217 s, 243 MB/s 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:04.618 256+0 records in 00:03:04.618 256+0 records out 00:03:04.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115585 s, 90.7 MB/s 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:04.618 256+0 records in 00:03:04.618 256+0 records out 00:03:04.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124031 s, 84.5 MB/s 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:04.618 19:09:38 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:04.618 19:09:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:04.619 19:09:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:04.877 19:09:38 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:04.877 19:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:05.136 19:09:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:05.136 19:09:38 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:05.395 19:09:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:05.395 [2024-11-26 19:09:39.113908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:05.395 [2024-11-26 19:09:39.143403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:05.395 [2024-11-26 19:09:39.143405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:05.395 [2024-11-26 19:09:39.172666] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:05.395 [2024-11-26 19:09:39.172698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:03:08.682 spdk_app_start Round 1 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3463849 /var/tmp/spdk-nbd.sock 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3463849 ']' 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:08.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:08.682 19:09:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:08.682 Malloc0 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:08.682 Malloc1 00:03:08.682 19:09:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:08.682 19:09:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:08.941 /dev/nbd0 00:03:08.941 19:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:08.941 19:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:08.941 1+0 records in 00:03:08.941 1+0 records out 00:03:08.941 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203323 s, 20.1 MB/s 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:08.941 19:09:42 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:08.941 19:09:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:08.941 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:08.941 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:08.941 19:09:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:09.200 /dev/nbd1 00:03:09.200 19:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:09.200 19:09:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:09.200 19:09:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:09.200 19:09:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:09.200 19:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:09.200 19:09:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:09.200 19:09:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:09.201 1+0 records in 00:03:09.201 1+0 records out 00:03:09.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018813 s, 21.8 MB/s 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:09.201 19:09:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:09.201 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:09.201 19:09:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:09.201 19:09:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:09.201 19:09:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:09.201 19:09:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:09.201 19:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:09.201 { 00:03:09.201 "nbd_device": "/dev/nbd0", 00:03:09.201 "bdev_name": "Malloc0" 00:03:09.201 }, 00:03:09.201 { 00:03:09.201 "nbd_device": "/dev/nbd1", 00:03:09.201 "bdev_name": "Malloc1" 00:03:09.201 } 00:03:09.201 ]' 00:03:09.201 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:09.201 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:09.201 { 00:03:09.201 "nbd_device": "/dev/nbd0", 00:03:09.201 "bdev_name": "Malloc0" 00:03:09.201 }, 00:03:09.201 { 00:03:09.201 "nbd_device": "/dev/nbd1", 00:03:09.201 "bdev_name": "Malloc1" 00:03:09.201 } 00:03:09.201 ]' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:09.461 /dev/nbd1' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:09.461 /dev/nbd1' 00:03:09.461 
19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:09.461 256+0 records in 00:03:09.461 256+0 records out 00:03:09.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00431638 s, 243 MB/s 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:09.461 256+0 records in 00:03:09.461 256+0 records out 00:03:09.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124811 s, 84.0 MB/s 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:09.461 256+0 records in 00:03:09.461 256+0 records out 00:03:09.461 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123746 s, 84.7 MB/s 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:09.461 19:09:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:09.721 19:09:43 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:09.721 19:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:09.981 19:09:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:09.981 19:09:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:09.981 19:09:43 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:10.239 [2024-11-26 19:09:43.937713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:10.239 [2024-11-26 19:09:43.966477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:10.239 [2024-11-26 19:09:43.966480] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:10.239 [2024-11-26 19:09:43.996164] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:10.239 [2024-11-26 19:09:43.996196] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:13.525 19:09:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:03:13.525 19:09:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:03:13.525 spdk_app_start Round 2 00:03:13.525 19:09:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3463849 /var/tmp/spdk-nbd.sock 00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3463849 ']' 00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:13.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:13.525 19:09:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:13.525 19:09:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:13.525 19:09:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:13.525 19:09:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:13.525 Malloc0 00:03:13.525 19:09:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:03:13.525 Malloc1 00:03:13.525 19:09:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:13.525 19:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:03:13.784 /dev/nbd0 00:03:13.784 19:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:03:13.784 19:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:13.784 1+0 records in 00:03:13.784 1+0 records out 00:03:13.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177299 s, 23.1 MB/s 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:13.784 19:09:47 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:13.784 19:09:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:13.784 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:13.784 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:13.784 19:09:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:03:14.043 /dev/nbd1 00:03:14.043 19:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:03:14.044 1+0 records in 00:03:14.044 1+0 records out 00:03:14.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000106626 s, 38.4 MB/s 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:03:14.044 19:09:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:03:14.044 { 00:03:14.044 "nbd_device": "/dev/nbd0", 00:03:14.044 "bdev_name": "Malloc0" 00:03:14.044 }, 00:03:14.044 { 00:03:14.044 "nbd_device": "/dev/nbd1", 00:03:14.044 "bdev_name": "Malloc1" 00:03:14.044 } 00:03:14.044 ]' 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:14.044 19:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:03:14.044 { 00:03:14.044 "nbd_device": "/dev/nbd0", 00:03:14.044 "bdev_name": "Malloc0" 00:03:14.044 }, 00:03:14.044 { 00:03:14.044 "nbd_device": "/dev/nbd1", 00:03:14.044 "bdev_name": "Malloc1" 00:03:14.044 } 00:03:14.044 ]' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:03:14.303 /dev/nbd1' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:03:14.303 /dev/nbd1' 00:03:14.303 
19:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:03:14.303 256+0 records in 00:03:14.303 256+0 records out 00:03:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00299167 s, 350 MB/s 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:03:14.303 256+0 records in 00:03:14.303 256+0 records out 00:03:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011965 s, 87.6 MB/s 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:03:14.303 256+0 records in 00:03:14.303 256+0 records out 00:03:14.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151189 s, 69.4 MB/s 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:03:14.303 19:09:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:03:14.563 19:09:48 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:03:14.563 19:09:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:03:14.564 19:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:03:14.822 19:09:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:03:14.822 19:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:03:14.822 19:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:03:14.822 19:09:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:03:14.823 19:09:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:03:14.823 19:09:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:03:14.823 19:09:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:03:15.081 [2024-11-26 19:09:48.782845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:15.081 [2024-11-26 19:09:48.812142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:15.081 [2024-11-26 19:09:48.812163] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:15.081 [2024-11-26 19:09:48.841383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:03:15.081 [2024-11-26 19:09:48.841418] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:03:18.371 19:09:51 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3463849 /var/tmp/spdk-nbd.sock 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3463849 ']' 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:03:18.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
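The `nbd_stop_disk` / `waitfornbd_exit` sequence traced above (bdev/nbd_common.sh@35-45) polls `/proc/partitions` until the nbd device name disappears. A minimal sketch of that loop, reconstructed from the xtrace rather than copied from SPDK — the partitions file is parameterized here (a deviation from the traced code) so the sketch stays self-contained:

```shell
# Hypothetical reconstruction of waitfornbd_exit from the xtrace above:
# poll a partitions listing until the nbd device name is gone, giving up
# after 20 iterations. Defaults to /proc/partitions as in the trace.
waitfornbd_exit() {
    local nbd_name=$1
    local partitions=${2:-/proc/partitions}
    local i
    for ((i = 1; i <= 20; i++)); do
        # as soon as the device no longer appears, stop waiting
        grep -q -w "$nbd_name" "$partitions" || break
        sleep 0.1
    done
    return 0
}
```

In the trace the `grep` fails on the first pass (the device was already detached), so the loop hits `break` immediately and the helper returns 0.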
00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:03:18.371 19:09:51 event.app_repeat -- event/event.sh@39 -- # killprocess 3463849 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3463849 ']' 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3463849 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463849 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3463849' 00:03:18.371 killing process with pid 3463849 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3463849 00:03:18.371 19:09:51 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3463849 00:03:18.371 spdk_app_start is called in Round 0. 00:03:18.371 Shutdown signal received, stop current app iteration 00:03:18.371 Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 reinitialization... 00:03:18.372 spdk_app_start is called in Round 1. 00:03:18.372 Shutdown signal received, stop current app iteration 00:03:18.372 Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 reinitialization... 00:03:18.372 spdk_app_start is called in Round 2. 
00:03:18.372 Shutdown signal received, stop current app iteration 00:03:18.372 Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 reinitialization... 00:03:18.372 spdk_app_start is called in Round 3. 00:03:18.372 Shutdown signal received, stop current app iteration 00:03:18.372 19:09:51 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:03:18.372 19:09:51 event.app_repeat -- event/event.sh@42 -- # return 0 00:03:18.372 00:03:18.372 real 0m14.840s 00:03:18.372 user 0m32.266s 00:03:18.372 sys 0m1.916s 00:03:18.372 19:09:51 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:18.372 19:09:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:03:18.372 ************************************ 00:03:18.372 END TEST app_repeat 00:03:18.372 ************************************ 00:03:18.372 19:09:52 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:03:18.372 19:09:52 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:03:18.372 19:09:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.372 19:09:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.372 19:09:52 event -- common/autotest_common.sh@10 -- # set +x 00:03:18.372 ************************************ 00:03:18.372 START TEST cpu_locks 00:03:18.372 ************************************ 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:03:18.372 * Looking for test storage... 
00:03:18.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.372 19:09:52 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.372 --rc genhtml_branch_coverage=1 00:03:18.372 --rc genhtml_function_coverage=1 00:03:18.372 --rc genhtml_legend=1 00:03:18.372 --rc geninfo_all_blocks=1 00:03:18.372 --rc geninfo_unexecuted_blocks=1 00:03:18.372 00:03:18.372 ' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.372 --rc genhtml_branch_coverage=1 00:03:18.372 --rc genhtml_function_coverage=1 00:03:18.372 --rc genhtml_legend=1 00:03:18.372 --rc geninfo_all_blocks=1 00:03:18.372 --rc geninfo_unexecuted_blocks=1 
00:03:18.372 00:03:18.372 ' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.372 --rc genhtml_branch_coverage=1 00:03:18.372 --rc genhtml_function_coverage=1 00:03:18.372 --rc genhtml_legend=1 00:03:18.372 --rc geninfo_all_blocks=1 00:03:18.372 --rc geninfo_unexecuted_blocks=1 00:03:18.372 00:03:18.372 ' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:18.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.372 --rc genhtml_branch_coverage=1 00:03:18.372 --rc genhtml_function_coverage=1 00:03:18.372 --rc genhtml_legend=1 00:03:18.372 --rc geninfo_all_blocks=1 00:03:18.372 --rc geninfo_unexecuted_blocks=1 00:03:18.372 00:03:18.372 ' 00:03:18.372 19:09:52 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:03:18.372 19:09:52 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:03:18.372 19:09:52 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:03:18.372 19:09:52 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:18.372 19:09:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:18.372 ************************************ 00:03:18.372 START TEST default_locks 00:03:18.372 ************************************ 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3467422 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3467422 00:03:18.372 19:09:52 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3467422 ']' 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:18.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:18.372 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:18.372 [2024-11-26 19:09:52.210923] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:18.372 [2024-11-26 19:09:52.210972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467422 ] 00:03:18.632 [2024-11-26 19:09:52.275446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.632 [2024-11-26 19:09:52.305127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:18.632 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:18.632 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:03:18.632 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3467422 00:03:18.632 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3467422 00:03:18.632 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:18.891 lslocks: write error 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3467422 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3467422 ']' 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3467422 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467422 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3467422' 00:03:18.891 killing process with pid 3467422 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3467422 00:03:18.891 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3467422 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3467422 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3467422 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3467422 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3467422 ']' 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:19.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
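The `killprocess` expansion traced above (common/autotest_common.sh@954-978) checks the pid, sends SIGTERM, and reaps the process. A hedged condensation — the trace's extra `uname`/`ps --no-headers -o comm=` guard (skipping processes named `sudo`) is deliberately elided:

```shell
# Hypothetical condensation of the killprocess helper traced above:
# refuse an empty pid, confirm the process is alive, terminate it,
# then reap it with wait (works for children of this shell).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # no pid supplied
    kill -0 "$pid" 2>/dev/null || return 1  # process not running
    echo "killing process with pid $pid"
    kill "$pid"                             # SIGTERM, as in the trace
    wait "$pid" 2>/dev/null || true         # reap; ignore signal status
    return 0
}
```

The subsequent `wait 3467422` line in the log plays the same reaping role: it guarantees the target has actually exited before the next test round starts.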
00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:19.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3467422) - No such process 00:03:19.151 ERROR: process (pid: 3467422) is no longer running 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:03:19.151 00:03:19.151 real 0m0.673s 00:03:19.151 user 0m0.635s 00:03:19.151 sys 0m0.349s 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.151 19:09:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:03:19.151 ************************************ 00:03:19.151 END TEST default_locks 00:03:19.151 ************************************ 00:03:19.151 19:09:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:03:19.151 19:09:52 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:19.151 19:09:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:19.151 19:09:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:19.151 ************************************ 00:03:19.151 START TEST default_locks_via_rpc 00:03:19.151 ************************************ 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3467672 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3467672 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3467672 ']' 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:19.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.151 19:09:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:19.151 [2024-11-26 19:09:52.923904] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:19.151 [2024-11-26 19:09:52.923953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467672 ] 00:03:19.151 [2024-11-26 19:09:52.989672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:19.411 [2024-11-26 19:09:53.020861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.411 19:09:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3467672 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3467672 00:03:19.411 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3467672 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3467672 ']' 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3467672 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467672 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3467672' 00:03:19.670 killing process with pid 3467672 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3467672 00:03:19.670 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3467672 00:03:19.929 00:03:19.929 real 0m0.688s 00:03:19.929 user 0m0.656s 00:03:19.929 sys 0m0.352s 00:03:19.929 19:09:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:19.929 19:09:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:19.929 ************************************ 00:03:19.929 END TEST default_locks_via_rpc 00:03:19.929 ************************************ 00:03:19.929 19:09:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:03:19.929 19:09:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:19.929 19:09:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:19.929 19:09:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:19.929 ************************************ 00:03:19.929 START TEST non_locking_app_on_locked_coremask 00:03:19.929 ************************************ 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3467814 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3467814 /var/tmp/spdk.sock 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3467814 ']' 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:19.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
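The `lt 1.15 2` / `cmp_versions` walk traced earlier (scripts/common.sh@333-368) splits the two version strings on `.`, `-`, or `:` and compares them field by field. A hypothetical condensation of that logic, not the verbatim SPDK function:

```shell
# Hypothetical condensation of the cmp_versions comparison traced above:
# split two version strings on '.', '-' or ':' and compare numerically,
# field by field, treating missing fields as 0. Succeeds when $1 is
# strictly older than $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not less-than
}
```

This matches the traced outcome: `lt 1.15 2` succeeds on the first field (1 < 2) and returns 0, which is what selects the lcov coverage options in the log.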
00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:19.929 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:19.929 [2024-11-26 19:09:53.655047] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:19.929 [2024-11-26 19:09:53.655096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467814 ] 00:03:19.929 [2024-11-26 19:09:53.718967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:19.929 [2024-11-26 19:09:53.749496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3467824 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3467824 /var/tmp/spdk2.sock 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3467824 ']' 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:20.188 19:09:53 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:20.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:20.188 19:09:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:03:20.188 [2024-11-26 19:09:53.950159] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:20.188 [2024-11-26 19:09:53.950211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467824 ] 00:03:20.188 [2024-11-26 19:09:54.044549] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:03:20.188 [2024-11-26 19:09:54.044573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:20.447 [2024-11-26 19:09:54.107742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:21.015 19:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:21.015 19:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:21.015 19:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3467814 00:03:21.015 19:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3467814 00:03:21.015 19:09:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:21.274 lslocks: write error 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3467814 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3467814 ']' 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3467814 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467814 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3467814' 00:03:21.274 killing process with pid 3467814 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3467814 00:03:21.274 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3467814 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3467824 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3467824 ']' 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3467824 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3467824 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3467824' 00:03:21.842 killing process with pid 3467824 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3467824 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3467824 00:03:21.842 00:03:21.842 real 0m2.041s 00:03:21.842 user 0m2.200s 00:03:21.842 sys 0m0.683s 00:03:21.842 19:09:55 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.842 19:09:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:21.842 ************************************ 00:03:21.842 END TEST non_locking_app_on_locked_coremask 00:03:21.842 ************************************ 00:03:21.842 19:09:55 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:03:21.842 19:09:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.842 19:09:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.842 19:09:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:21.842 ************************************ 00:03:21.842 START TEST locking_app_on_unlocked_coremask 00:03:21.842 ************************************ 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3468280 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3468280 /var/tmp/spdk.sock 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3468280 ']' 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:03:21.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:21.842 19:09:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:03:22.102 [2024-11-26 19:09:55.743973] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:22.102 [2024-11-26 19:09:55.744021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468280 ] 00:03:22.102 [2024-11-26 19:09:55.810371] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:03:22.102 [2024-11-26 19:09:55.810405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.102 [2024-11-26 19:09:55.842496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3468491 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3468491 /var/tmp/spdk2.sock 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3468491 ']' 00:03:22.362 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:22.363 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:22.363 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:22.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:22.363 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:22.363 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:22.363 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:03:22.363 [2024-11-26 19:09:56.048606] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:22.363 [2024-11-26 19:09:56.048660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468491 ] 00:03:22.363 [2024-11-26 19:09:56.147904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:22.363 [2024-11-26 19:09:56.206462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.322 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:23.322 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:23.322 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3468491 00:03:23.322 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3468491 00:03:23.322 19:09:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:23.322 lslocks: write error 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3468280 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3468280 ']' 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3468280 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468280 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468280' 00:03:23.322 killing process with pid 3468280 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3468280 00:03:23.322 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3468280 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3468491 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3468491 ']' 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3468491 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468491 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468491' 00:03:23.891 killing process with pid 3468491 00:03:23.891 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3468491 00:03:23.891 19:09:57 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3468491 00:03:24.151 00:03:24.151 real 0m2.052s 00:03:24.151 user 0m2.208s 00:03:24.151 sys 0m0.681s 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:24.151 ************************************ 00:03:24.151 END TEST locking_app_on_unlocked_coremask 00:03:24.151 ************************************ 00:03:24.151 19:09:57 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:03:24.151 19:09:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:24.151 19:09:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:24.151 19:09:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:24.151 ************************************ 00:03:24.151 START TEST locking_app_on_locked_coremask 00:03:24.151 ************************************ 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3468891 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3468891 /var/tmp/spdk.sock 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3468891 ']' 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:24.151 19:09:57 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:24.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:24.151 19:09:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:24.151 [2024-11-26 19:09:57.834056] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:24.151 [2024-11-26 19:09:57.834091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468891 ] 00:03:24.151 [2024-11-26 19:09:57.890167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:24.151 [2024-11-26 19:09:57.918344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3468894 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 
3468894 /var/tmp/spdk2.sock 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3468894 /var/tmp/spdk2.sock 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3468894 /var/tmp/spdk2.sock 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3468894 ']' 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:24.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:24.411 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:24.411 [2024-11-26 19:09:58.109579] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:24.411 [2024-11-26 19:09:58.109613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3468894 ] 00:03:24.411 [2024-11-26 19:09:58.198769] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3468891 has claimed it. 00:03:24.411 [2024-11-26 19:09:58.198806] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:03:24.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3468894) - No such process 00:03:24.980 ERROR: process (pid: 3468894) is no longer running 00:03:24.980 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:24.980 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:03:24.980 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:03:24.980 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:24.981 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:24.981 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:24.981 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3468891 00:03:24.981 19:09:58 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3468891 00:03:24.981 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:03:25.240 lslocks: write error 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3468891 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3468891 ']' 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3468891 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3468891 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3468891' 00:03:25.240 killing process with pid 3468891 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3468891 00:03:25.240 19:09:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3468891 00:03:25.500 00:03:25.500 real 0m1.308s 00:03:25.500 user 0m1.433s 00:03:25.500 sys 0m0.388s 00:03:25.500 19:09:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:25.500 19:09:59 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:03:25.500 ************************************ 00:03:25.500 END TEST locking_app_on_locked_coremask 00:03:25.500 ************************************ 00:03:25.500 19:09:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:03:25.500 19:09:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:25.500 19:09:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:25.500 19:09:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:25.500 ************************************ 00:03:25.500 START TEST locking_overlapped_coremask 00:03:25.500 ************************************ 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3469254 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3469254 /var/tmp/spdk.sock 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3469254 ']' 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:25.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:25.500 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:25.500 [2024-11-26 19:09:59.200043] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:25.500 [2024-11-26 19:09:59.200090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469254 ] 00:03:25.500 [2024-11-26 19:09:59.265387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:25.500 [2024-11-26 19:09:59.295328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:25.500 [2024-11-26 19:09:59.295531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:25.500 [2024-11-26 19:09:59.295532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3469266 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3469266 /var/tmp/spdk2.sock 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3469266 /var/tmp/spdk2.sock 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3469266 /var/tmp/spdk2.sock 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3469266 ']' 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:25.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:25.760 19:09:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:25.760 [2024-11-26 19:09:59.493794] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:25.760 [2024-11-26 19:09:59.493834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469266 ] 00:03:25.760 [2024-11-26 19:09:59.608197] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3469254 has claimed it. 00:03:25.760 [2024-11-26 19:09:59.608242] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:03:26.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3469266) - No such process 00:03:26.328 ERROR: process (pid: 3469266) is no longer running 00:03:26.328 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:26.328 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:03:26.328 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3469254 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3469254 ']' 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3469254 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469254 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3469254' 00:03:26.329 killing process with pid 3469254 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3469254 00:03:26.329 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3469254 00:03:26.588 00:03:26.588 real 0m1.195s 00:03:26.588 user 0m3.321s 00:03:26.588 sys 0m0.303s 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:03:26.588 
************************************ 00:03:26.588 END TEST locking_overlapped_coremask 00:03:26.588 ************************************ 00:03:26.588 19:10:00 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:03:26.588 19:10:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:26.588 19:10:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:26.588 19:10:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:26.588 ************************************ 00:03:26.588 START TEST locking_overlapped_coremask_via_rpc 00:03:26.588 ************************************ 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3469529 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3469529 /var/tmp/spdk.sock 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3469529 ']' 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:26.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:03:26.588 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:26.588 [2024-11-26 19:10:00.446709] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:26.588 [2024-11-26 19:10:00.446759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469529 ] 00:03:26.848 [2024-11-26 19:10:00.513529] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:03:26.848 [2024-11-26 19:10:00.513561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:26.848 [2024-11-26 19:10:00.546954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:26.848 [2024-11-26 19:10:00.547107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:26.849 [2024-11-26 19:10:00.547119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:26.849 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3469630 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3469630 /var/tmp/spdk2.sock 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- 
common/autotest_common.sh@835 -- # '[' -z 3469630 ']' 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:27.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.108 19:10:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:03:27.108 [2024-11-26 19:10:00.753036] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:27.108 [2024-11-26 19:10:00.753088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469630 ] 00:03:27.108 [2024-11-26 19:10:00.850728] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:03:27.108 [2024-11-26 19:10:00.850754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:27.108 [2024-11-26 19:10:00.909716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:27.108 [2024-11-26 19:10:00.913224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:27.108 [2024-11-26 19:10:00.913227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:27.679 19:10:01 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:27.679 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.939 [2024-11-26 19:10:01.546164] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3469529 has claimed it. 00:03:27.939 request: 00:03:27.939 { 00:03:27.939 "method": "framework_enable_cpumask_locks", 00:03:27.939 "req_id": 1 00:03:27.939 } 00:03:27.939 Got JSON-RPC error response 00:03:27.939 response: 00:03:27.939 { 00:03:27.939 "code": -32603, 00:03:27.939 "message": "Failed to claim CPU core: 2" 00:03:27.939 } 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3469529 /var/tmp/spdk.sock 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3469529 ']' 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3469630 /var/tmp/spdk2.sock 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3469630 ']' 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:03:27.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:27.939 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.198 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:28.198 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:28.198 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:03:28.198 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:03:28.198 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:03:28.199 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:03:28.199 00:03:28.199 real 0m1.475s 00:03:28.199 user 0m0.640s 00:03:28.199 sys 0m0.124s 00:03:28.199 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.199 19:10:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.199 ************************************ 00:03:28.199 END TEST locking_overlapped_coremask_via_rpc 00:03:28.199 ************************************ 00:03:28.199 19:10:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:03:28.199 19:10:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3469529 ]] 00:03:28.199 19:10:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3469529 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3469529 ']' 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3469529 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469529 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3469529' 00:03:28.199 killing process with pid 3469529 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3469529 00:03:28.199 19:10:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3469529 00:03:28.458 19:10:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3469630 ]] 00:03:28.458 19:10:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3469630 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3469630 ']' 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3469630 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3469630 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3469630' 00:03:28.458 killing process with pid 3469630 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3469630 00:03:28.458 19:10:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3469630 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3469529 ]] 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3469529 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3469529 ']' 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3469529 00:03:28.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3469529) - No such process 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3469529 is not found' 00:03:28.722 Process with pid 3469529 is not found 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3469630 ]] 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3469630 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3469630 ']' 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3469630 00:03:28.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3469630) - No such process 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3469630 is not found' 00:03:28.722 Process with pid 3469630 is not found 00:03:28.722 19:10:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:03:28.722 00:03:28.722 real 0m10.366s 00:03:28.722 user 0m19.035s 00:03:28.722 sys 0m3.578s 00:03:28.722 19:10:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.722 
19:10:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:03:28.722 ************************************ 00:03:28.722 END TEST cpu_locks 00:03:28.722 ************************************ 00:03:28.722 00:03:28.722 real 0m31.795s 00:03:28.722 user 1m2.632s 00:03:28.722 sys 0m6.235s 00:03:28.722 19:10:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.722 19:10:02 event -- common/autotest_common.sh@10 -- # set +x 00:03:28.722 ************************************ 00:03:28.722 END TEST event 00:03:28.722 ************************************ 00:03:28.722 19:10:02 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:03:28.722 19:10:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.722 19:10:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.722 19:10:02 -- common/autotest_common.sh@10 -- # set +x 00:03:28.722 ************************************ 00:03:28.722 START TEST thread 00:03:28.722 ************************************ 00:03:28.722 19:10:02 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:03:28.722 * Looking for test storage... 
00:03:28.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:03:28.722 19:10:02 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:28.722 19:10:02 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:03:28.722 19:10:02 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.014 19:10:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.014 19:10:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.014 19:10:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.014 19:10:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.014 19:10:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.014 19:10:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.014 19:10:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.014 19:10:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.014 19:10:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.014 19:10:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.014 19:10:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.014 19:10:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:03:29.014 19:10:02 thread -- scripts/common.sh@345 -- # : 1 00:03:29.014 19:10:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.014 19:10:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.014 19:10:02 thread -- scripts/common.sh@365 -- # decimal 1 00:03:29.014 19:10:02 thread -- scripts/common.sh@353 -- # local d=1 00:03:29.014 19:10:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.014 19:10:02 thread -- scripts/common.sh@355 -- # echo 1 00:03:29.014 19:10:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.014 19:10:02 thread -- scripts/common.sh@366 -- # decimal 2 00:03:29.014 19:10:02 thread -- scripts/common.sh@353 -- # local d=2 00:03:29.014 19:10:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.014 19:10:02 thread -- scripts/common.sh@355 -- # echo 2 00:03:29.014 19:10:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.014 19:10:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.014 19:10:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.014 19:10:02 thread -- scripts/common.sh@368 -- # return 0 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.014 --rc genhtml_branch_coverage=1 00:03:29.014 --rc genhtml_function_coverage=1 00:03:29.014 --rc genhtml_legend=1 00:03:29.014 --rc geninfo_all_blocks=1 00:03:29.014 --rc geninfo_unexecuted_blocks=1 00:03:29.014 00:03:29.014 ' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.014 --rc genhtml_branch_coverage=1 00:03:29.014 --rc genhtml_function_coverage=1 00:03:29.014 --rc genhtml_legend=1 00:03:29.014 --rc geninfo_all_blocks=1 00:03:29.014 --rc geninfo_unexecuted_blocks=1 00:03:29.014 00:03:29.014 ' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:29.014 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.014 --rc genhtml_branch_coverage=1 00:03:29.014 --rc genhtml_function_coverage=1 00:03:29.014 --rc genhtml_legend=1 00:03:29.014 --rc geninfo_all_blocks=1 00:03:29.014 --rc geninfo_unexecuted_blocks=1 00:03:29.014 00:03:29.014 ' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.014 --rc genhtml_branch_coverage=1 00:03:29.014 --rc genhtml_function_coverage=1 00:03:29.014 --rc genhtml_legend=1 00:03:29.014 --rc geninfo_all_blocks=1 00:03:29.014 --rc geninfo_unexecuted_blocks=1 00:03:29.014 00:03:29.014 ' 00:03:29.014 19:10:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:29.014 19:10:02 thread -- common/autotest_common.sh@10 -- # set +x 00:03:29.014 ************************************ 00:03:29.014 START TEST thread_poller_perf 00:03:29.014 ************************************ 00:03:29.014 19:10:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:03:29.014 [2024-11-26 19:10:02.626451] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:29.014 [2024-11-26 19:10:02.626501] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470076 ] 00:03:29.014 [2024-11-26 19:10:02.694997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:29.014 [2024-11-26 19:10:02.730612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.014 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:03:30.011 [2024-11-26T18:10:03.876Z] ====================================== 00:03:30.011 [2024-11-26T18:10:03.876Z] busy:2407757000 (cyc) 00:03:30.011 [2024-11-26T18:10:03.876Z] total_run_count: 413000 00:03:30.011 [2024-11-26T18:10:03.876Z] tsc_hz: 2400000000 (cyc) 00:03:30.011 [2024-11-26T18:10:03.876Z] ====================================== 00:03:30.011 [2024-11-26T18:10:03.876Z] poller_cost: 5829 (cyc), 2428 (nsec) 00:03:30.011 00:03:30.011 real 0m1.146s 00:03:30.011 user 0m1.079s 00:03:30.011 sys 0m0.062s 00:03:30.011 19:10:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:30.011 19:10:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:03:30.011 ************************************ 00:03:30.011 END TEST thread_poller_perf 00:03:30.011 ************************************ 00:03:30.011 19:10:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:30.011 19:10:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:03:30.011 19:10:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:30.011 19:10:03 thread -- common/autotest_common.sh@10 -- # set +x 00:03:30.011 ************************************ 00:03:30.011 START TEST thread_poller_perf 00:03:30.011 
************************************ 00:03:30.011 19:10:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:03:30.011 [2024-11-26 19:10:03.818264] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:30.011 [2024-11-26 19:10:03.818312] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470429 ] 00:03:30.269 [2024-11-26 19:10:03.883432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.269 [2024-11-26 19:10:03.912549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.269 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:03:31.207 [2024-11-26T18:10:05.072Z] ====================================== 00:03:31.207 [2024-11-26T18:10:05.072Z] busy:2401251784 (cyc) 00:03:31.207 [2024-11-26T18:10:05.072Z] total_run_count: 5565000 00:03:31.207 [2024-11-26T18:10:05.072Z] tsc_hz: 2400000000 (cyc) 00:03:31.207 [2024-11-26T18:10:05.072Z] ====================================== 00:03:31.207 [2024-11-26T18:10:05.072Z] poller_cost: 431 (cyc), 179 (nsec) 00:03:31.207 00:03:31.207 real 0m1.130s 00:03:31.207 user 0m1.076s 00:03:31.207 sys 0m0.051s 00:03:31.207 19:10:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.207 19:10:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:03:31.207 ************************************ 00:03:31.207 END TEST thread_poller_perf 00:03:31.207 ************************************ 00:03:31.207 19:10:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:03:31.207 00:03:31.207 real 0m2.495s 00:03:31.207 user 0m2.264s 00:03:31.207 sys 0m0.234s 00:03:31.207 19:10:04 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:31.207 19:10:04 thread -- common/autotest_common.sh@10 -- # set +x 00:03:31.207 ************************************ 00:03:31.207 END TEST thread 00:03:31.207 ************************************ 00:03:31.207 19:10:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:03:31.207 19:10:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:03:31.207 19:10:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:31.207 19:10:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:31.207 19:10:04 -- common/autotest_common.sh@10 -- # set +x 00:03:31.207 ************************************ 00:03:31.207 START TEST app_cmdline 00:03:31.207 ************************************ 00:03:31.207 19:10:05 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:03:31.207 * Looking for test storage... 00:03:31.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:03:31.207 19:10:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:31.207 19:10:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:03:31.207 19:10:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.495 19:10:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.495 --rc genhtml_branch_coverage=1 
00:03:31.495 --rc genhtml_function_coverage=1 00:03:31.495 --rc genhtml_legend=1 00:03:31.495 --rc geninfo_all_blocks=1 00:03:31.495 --rc geninfo_unexecuted_blocks=1 00:03:31.495 00:03:31.495 ' 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.495 --rc genhtml_branch_coverage=1 00:03:31.495 --rc genhtml_function_coverage=1 00:03:31.495 --rc genhtml_legend=1 00:03:31.495 --rc geninfo_all_blocks=1 00:03:31.495 --rc geninfo_unexecuted_blocks=1 00:03:31.495 00:03:31.495 ' 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.495 --rc genhtml_branch_coverage=1 00:03:31.495 --rc genhtml_function_coverage=1 00:03:31.495 --rc genhtml_legend=1 00:03:31.495 --rc geninfo_all_blocks=1 00:03:31.495 --rc geninfo_unexecuted_blocks=1 00:03:31.495 00:03:31.495 ' 00:03:31.495 19:10:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:31.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.495 --rc genhtml_branch_coverage=1 00:03:31.495 --rc genhtml_function_coverage=1 00:03:31.495 --rc genhtml_legend=1 00:03:31.495 --rc geninfo_all_blocks=1 00:03:31.495 --rc geninfo_unexecuted_blocks=1 00:03:31.495 00:03:31.496 ' 00:03:31.496 19:10:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:03:31.496 19:10:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3470832 00:03:31.496 19:10:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3470832 00:03:31.496 19:10:05 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3470832 ']' 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:31.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:31.496 19:10:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:31.496 [2024-11-26 19:10:05.157551] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:31.496 [2024-11-26 19:10:05.157597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3470832 ] 00:03:31.496 [2024-11-26 19:10:05.216005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:31.496 [2024-11-26 19:10:05.246948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:03:31.754 { 00:03:31.754 "version": "SPDK v25.01-pre git sha1 c6092c872", 00:03:31.754 "fields": { 00:03:31.754 "major": 25, 00:03:31.754 "minor": 1, 00:03:31.754 "patch": 0, 00:03:31.754 "suffix": "-pre", 00:03:31.754 "commit": "c6092c872" 00:03:31.754 } 00:03:31.754 } 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:03:31.754 19:10:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:31.754 19:10:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:03:31.755 19:10:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:03:32.013 request: 00:03:32.013 { 00:03:32.013 "method": "env_dpdk_get_mem_stats", 00:03:32.013 "req_id": 1 00:03:32.013 } 00:03:32.013 Got JSON-RPC error response 00:03:32.013 response: 00:03:32.013 { 00:03:32.013 "code": -32601, 00:03:32.013 "message": "Method not found" 00:03:32.013 } 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:32.013 19:10:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3470832 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3470832 ']' 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3470832 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3470832 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3470832' 00:03:32.013 killing process with pid 3470832 00:03:32.013 
19:10:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 3470832 00:03:32.013 19:10:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 3470832 00:03:32.271 00:03:32.271 real 0m0.985s 00:03:32.271 user 0m1.189s 00:03:32.271 sys 0m0.325s 00:03:32.272 19:10:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.272 19:10:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:03:32.272 ************************************ 00:03:32.272 END TEST app_cmdline 00:03:32.272 ************************************ 00:03:32.272 19:10:06 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:03:32.272 19:10:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:32.272 19:10:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.272 19:10:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.272 ************************************ 00:03:32.272 START TEST version 00:03:32.272 ************************************ 00:03:32.272 19:10:06 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:03:32.272 * Looking for test storage... 
00:03:32.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:03:32.272 19:10:06 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.272 19:10:06 version -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.272 19:10:06 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.531 19:10:06 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.531 19:10:06 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.531 19:10:06 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.531 19:10:06 version -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.531 19:10:06 version -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.531 19:10:06 version -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.531 19:10:06 version -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.531 19:10:06 version -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.531 19:10:06 version -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.531 19:10:06 version -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.531 19:10:06 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.531 19:10:06 version -- scripts/common.sh@344 -- # case "$op" in 00:03:32.531 19:10:06 version -- scripts/common.sh@345 -- # : 1 00:03:32.531 19:10:06 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.531 19:10:06 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.531 19:10:06 version -- scripts/common.sh@365 -- # decimal 1 00:03:32.531 19:10:06 version -- scripts/common.sh@353 -- # local d=1 00:03:32.531 19:10:06 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.531 19:10:06 version -- scripts/common.sh@355 -- # echo 1 00:03:32.531 19:10:06 version -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.531 19:10:06 version -- scripts/common.sh@366 -- # decimal 2 00:03:32.531 19:10:06 version -- scripts/common.sh@353 -- # local d=2 00:03:32.531 19:10:06 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.531 19:10:06 version -- scripts/common.sh@355 -- # echo 2 00:03:32.531 19:10:06 version -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.531 19:10:06 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.531 19:10:06 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.531 19:10:06 version -- scripts/common.sh@368 -- # return 0 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.531 --rc genhtml_branch_coverage=1 00:03:32.531 --rc genhtml_function_coverage=1 00:03:32.531 --rc genhtml_legend=1 00:03:32.531 --rc geninfo_all_blocks=1 00:03:32.531 --rc geninfo_unexecuted_blocks=1 00:03:32.531 00:03:32.531 ' 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.531 --rc genhtml_branch_coverage=1 00:03:32.531 --rc genhtml_function_coverage=1 00:03:32.531 --rc genhtml_legend=1 00:03:32.531 --rc geninfo_all_blocks=1 00:03:32.531 --rc geninfo_unexecuted_blocks=1 00:03:32.531 00:03:32.531 ' 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:32.531 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.531 --rc genhtml_branch_coverage=1 00:03:32.531 --rc genhtml_function_coverage=1 00:03:32.531 --rc genhtml_legend=1 00:03:32.531 --rc geninfo_all_blocks=1 00:03:32.531 --rc geninfo_unexecuted_blocks=1 00:03:32.531 00:03:32.531 ' 00:03:32.531 19:10:06 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.531 --rc genhtml_branch_coverage=1 00:03:32.531 --rc genhtml_function_coverage=1 00:03:32.531 --rc genhtml_legend=1 00:03:32.531 --rc geninfo_all_blocks=1 00:03:32.531 --rc geninfo_unexecuted_blocks=1 00:03:32.531 00:03:32.531 ' 00:03:32.531 19:10:06 version -- app/version.sh@17 -- # get_header_version major 00:03:32.531 19:10:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:32.531 19:10:06 version -- app/version.sh@14 -- # cut -f2 00:03:32.531 19:10:06 version -- app/version.sh@14 -- # tr -d '"' 00:03:32.531 19:10:06 version -- app/version.sh@17 -- # major=25 00:03:32.531 19:10:06 version -- app/version.sh@18 -- # get_header_version minor 00:03:32.532 19:10:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # cut -f2 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # tr -d '"' 00:03:32.532 19:10:06 version -- app/version.sh@18 -- # minor=1 00:03:32.532 19:10:06 version -- app/version.sh@19 -- # get_header_version patch 00:03:32.532 19:10:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # cut -f2 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # tr -d '"' 00:03:32.532 
19:10:06 version -- app/version.sh@19 -- # patch=0 00:03:32.532 19:10:06 version -- app/version.sh@20 -- # get_header_version suffix 00:03:32.532 19:10:06 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # cut -f2 00:03:32.532 19:10:06 version -- app/version.sh@14 -- # tr -d '"' 00:03:32.532 19:10:06 version -- app/version.sh@20 -- # suffix=-pre 00:03:32.532 19:10:06 version -- app/version.sh@22 -- # version=25.1 00:03:32.532 19:10:06 version -- app/version.sh@25 -- # (( patch != 0 )) 00:03:32.532 19:10:06 version -- app/version.sh@28 -- # version=25.1rc0 00:03:32.532 19:10:06 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:03:32.532 19:10:06 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:03:32.532 19:10:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:03:32.532 19:10:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:03:32.532 00:03:32.532 real 0m0.172s 00:03:32.532 user 0m0.106s 00:03:32.532 sys 0m0.093s 00:03:32.532 19:10:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:32.532 19:10:06 version -- common/autotest_common.sh@10 -- # set +x 00:03:32.532 ************************************ 00:03:32.532 END TEST version 00:03:32.532 ************************************ 00:03:32.532 19:10:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:03:32.532 19:10:06 -- spdk/autotest.sh@194 -- # uname -s 00:03:32.532 19:10:06 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:03:32.532 19:10:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:03:32.532 19:10:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:03:32.532 19:10:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:03:32.532 19:10:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:32.532 19:10:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.532 19:10:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:03:32.532 19:10:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:03:32.532 19:10:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:03:32.532 19:10:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:03:32.532 19:10:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.532 19:10:06 -- common/autotest_common.sh@10 -- # set +x 00:03:32.532 ************************************ 00:03:32.532 START TEST nvmf_tcp 00:03:32.532 ************************************ 00:03:32.532 19:10:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:03:32.532 * Looking for test storage... 
00:03:32.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:03:32.532 19:10:06 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.532 19:10:06 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.532 19:10:06 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.791 19:10:06 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.791 19:10:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.792 19:10:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:03:32.792 19:10:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:03:32.792 19:10:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.792 19:10:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:32.792 ************************************ 00:03:32.792 START TEST nvmf_target_core 00:03:32.792 ************************************ 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:03:32.792 * Looking for test storage... 
00:03:32.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 
00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:32.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:32.792 --rc genhtml_branch_coverage=1 00:03:32.792 --rc genhtml_function_coverage=1 00:03:32.792 --rc genhtml_legend=1 00:03:32.792 --rc geninfo_all_blocks=1 00:03:32.792 --rc geninfo_unexecuted_blocks=1 00:03:32.792 00:03:32.792 ' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:32.792 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:32.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:03:32.793 ************************************ 00:03:32.793 START TEST nvmf_abort 00:03:32.793 ************************************ 00:03:32.793 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:03:33.102 * Looking for test storage... 
00:03:33.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.102 
19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.102 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.103 --rc genhtml_branch_coverage=1 00:03:33.103 --rc genhtml_function_coverage=1 00:03:33.103 --rc genhtml_legend=1 00:03:33.103 --rc geninfo_all_blocks=1 00:03:33.103 --rc 
geninfo_unexecuted_blocks=1 00:03:33.103 00:03:33.103 ' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.103 --rc genhtml_branch_coverage=1 00:03:33.103 --rc genhtml_function_coverage=1 00:03:33.103 --rc genhtml_legend=1 00:03:33.103 --rc geninfo_all_blocks=1 00:03:33.103 --rc geninfo_unexecuted_blocks=1 00:03:33.103 00:03:33.103 ' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.103 --rc genhtml_branch_coverage=1 00:03:33.103 --rc genhtml_function_coverage=1 00:03:33.103 --rc genhtml_legend=1 00:03:33.103 --rc geninfo_all_blocks=1 00:03:33.103 --rc geninfo_unexecuted_blocks=1 00:03:33.103 00:03:33.103 ' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.103 --rc genhtml_branch_coverage=1 00:03:33.103 --rc genhtml_function_coverage=1 00:03:33.103 --rc genhtml_legend=1 00:03:33.103 --rc geninfo_all_blocks=1 00:03:33.103 --rc geninfo_unexecuted_blocks=1 00:03:33.103 00:03:33.103 ' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
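The lcov version check traced above (`cmp_versions 1.15 '<' 2` in scripts/common.sh, splitting each version on `IFS=.-:` and comparing field by field) can be sketched as a standalone function. This is a simplified sketch, not the real helper: `version_lt` is an assumed name, and the actual `cmp_versions` also handles `>` and `==` operators.

```shell
#!/usr/bin/env bash
# Compare two dotted version strings numerically, field by field,
# the way the cmp_versions trace above does: split on ".", "-", ":".
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # iterate over the longer of the two field lists
    local len=${#ver1[@]} i
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0                 # first differing field decides
        (( a > b )) && return 1
    done
    return 1                                    # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # the lcov check from the log
```

Note the field-wise numeric compare is what makes `1.9 < 1.15` true here (9 < 15), unlike a plain string comparison.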
00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.103 19:10:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:33.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:03:33.103 19:10:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:38.375 19:10:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:03:38.375 Found 0000:31:00.0 (0x8086 - 0x159b) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:03:38.375 Found 0000:31:00.1 (0x8086 - 0x159b) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:03:38.375 19:10:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:03:38.375 Found net devices under 0000:31:00.0: cvl_0_0 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:03:38.375 Found net devices under 0000:31:00.1: cvl_0_1 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:03:38.375 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:03:38.376 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:03:38.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:03:38.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.501 ms 00:03:38.635 00:03:38.635 --- 10.0.0.2 ping statistics --- 00:03:38.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:38.635 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:03:38.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:03:38.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:03:38.635 00:03:38.635 --- 10.0.0.1 ping statistics --- 00:03:38.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:38.635 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort 
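Stripped of xtrace noise, the network plumbing traced above moves the target-side NIC into a private network namespace so initiator (10.0.0.1) and target (10.0.0.2) traffic crosses the real link. A dry-run sketch, with interface and namespace names taken from the log; the `run` wrapper only prints each command here (drop the `echo` to execute for real, which requires root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target/initiator split from the trace above.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0          # names as they appear in the log
INIT_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"           # target NIC into the netns
run ip addr add 10.0.0.1/24 dev "$INIT_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port, tagged with a comment for later cleanup
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
run ping -c 1 10.0.0.2     # initiator -> target reachability check
```

The comment tag on the iptables rule is what lets the teardown path find and delete exactly the rules this test inserted.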
-- common/autotest_common.sh@726 -- # xtrace_disable 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3475011 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3475011 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3475011 ']' 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:38.635 19:10:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:03:38.635 [2024-11-26 19:10:12.409467] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:03:38.635 [2024-11-26 19:10:12.409534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:03:38.895 [2024-11-26 19:10:12.503318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:38.895 [2024-11-26 19:10:12.556350] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:03:38.895 [2024-11-26 19:10:12.556403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:03:38.895 [2024-11-26 19:10:12.556411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:38.895 [2024-11-26 19:10:12.556418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:38.895 [2024-11-26 19:10:12.556424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:03:38.895 [2024-11-26 19:10:12.558373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:38.895 [2024-11-26 19:10:12.558599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:38.895 [2024-11-26 19:10:12.558600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 [2024-11-26 19:10:13.254788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 Malloc0 00:03:39.465 19:10:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 Delay0 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.465 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.465 [2024-11-26 19:10:13.326610] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:39.724 19:10:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:03:39.724 [2024-11-26 19:10:13.392672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:03:42.262 Initializing NVMe Controllers 00:03:42.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:03:42.262 controller IO queue size 128 less than required 00:03:42.262 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:03:42.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:03:42.262 Initialization complete. Launching workers. 
00:03:42.262 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42944 00:03:42.262 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 43005, failed to submit 62 00:03:42.262 success 42948, unsuccessful 57, failed 0 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:03:42.262 rmmod nvme_tcp 00:03:42.262 rmmod nvme_fabrics 00:03:42.262 rmmod nvme_keyring 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:03:42.262 19:10:15 
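The abort test itself (target/abort.sh) reduces to the RPC sequence in the `rpc_cmd` calls above: create the TCP transport, back a subsystem with a deliberately slow delay bdev so in-flight I/O can be aborted, expose it on port 4420, then drive it with the abort example. A sketch reconstructed from the trace; `rpc` here only echoes the `rpc.py` invocation, since running these for real needs a live nvmf_tgt:

```shell
#!/usr/bin/env bash
# Reconstruction of the abort test's RPC setup from the trace above.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode0

rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
rpc bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB, 4 KiB blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000    # ~1 s added latency
rpc nvmf_create_subsystem "$NQN" -a -s SPDK0
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# drive I/O and submit aborts against the slow namespace:
echo "./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128"
```

The delay bdev is the point of the setup: with ~1 s per I/O and queue depth 128, most commands are still outstanding when the abort is issued, which is why the log shows tens of thousands of aborts submitted and succeeding.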
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3475011 ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3475011 ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3475011' 00:03:42.262 killing process with pid 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3475011 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:42.262 19:10:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:03:44.169 00:03:44.169 real 0m11.182s 00:03:44.169 user 0m12.778s 00:03:44.169 sys 0m5.173s 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:03:44.169 ************************************ 00:03:44.169 END TEST nvmf_abort 00:03:44.169 ************************************ 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:03:44.169 ************************************ 00:03:44.169 START TEST nvmf_ns_hotplug_stress 00:03:44.169 ************************************ 00:03:44.169 19:10:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:03:44.169 * Looking for test storage... 00:03:44.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.169 
19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:03:44.169 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.170 19:10:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:44.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.170 --rc genhtml_branch_coverage=1 00:03:44.170 --rc genhtml_function_coverage=1 00:03:44.170 --rc genhtml_legend=1 00:03:44.170 --rc geninfo_all_blocks=1 00:03:44.170 --rc geninfo_unexecuted_blocks=1 00:03:44.170 00:03:44.170 ' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:44.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.170 --rc genhtml_branch_coverage=1 00:03:44.170 --rc genhtml_function_coverage=1 00:03:44.170 --rc genhtml_legend=1 00:03:44.170 --rc geninfo_all_blocks=1 00:03:44.170 --rc geninfo_unexecuted_blocks=1 00:03:44.170 00:03:44.170 ' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:44.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.170 --rc genhtml_branch_coverage=1 00:03:44.170 --rc genhtml_function_coverage=1 00:03:44.170 --rc genhtml_legend=1 00:03:44.170 --rc geninfo_all_blocks=1 00:03:44.170 --rc geninfo_unexecuted_blocks=1 00:03:44.170 00:03:44.170 ' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:44.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.170 --rc genhtml_branch_coverage=1 00:03:44.170 --rc genhtml_function_coverage=1 00:03:44.170 --rc genhtml_legend=1 00:03:44.170 --rc geninfo_all_blocks=1 00:03:44.170 --rc geninfo_unexecuted_blocks=1 00:03:44.170 
00:03:44.170 ' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:03:44.170 19:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:03:50.739 19:10:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:03:50.739 Found 0000:31:00.0 (0x8086 - 0x159b) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:03:50.739 Found 0000:31:00.1 (0x8086 - 0x159b) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:03:50.739 19:10:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:03:50.739 Found net devices under 0000:31:00.0: cvl_0_0 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:03:50.739 19:10:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:03:50.739 Found net devices under 0000:31:00.1: cvl_0_1 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:03:50.739 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:03:50.740 19:10:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:03:50.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:03:50.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:03:50.740 00:03:50.740 --- 10.0.0.2 ping statistics --- 00:03:50.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:50.740 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:03:50.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:03:50.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:03:50.740 00:03:50.740 --- 10.0.0.1 ping statistics --- 00:03:50.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:03:50.740 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3480355 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3480355 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3480355 ']' 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
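The interface plumbing logged above (nvmf/common.sh@262-291) builds a two-port loopback topology: the target-side port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk with 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and a firewall rule plus bidirectional pings verify the path. A condensed sketch of that sequence is below; the interface names, addresses, and port come straight from the log, and run() only echoes each command so the sketch is safe to execute without root or these NICs:

```shell
#!/bin/sh
# Topology from the log: cvl_0_0 (target, 10.0.0.2) lives in netns
# cvl_0_0_ns_spdk; cvl_0_1 (initiator, 10.0.0.1) stays in the root ns.
# run() echoes instead of executing; n counts the commands issued.
n=0
run() { echo "+ $*"; n=$((n+1)); }

run ip netns add cvl_0_0_ns_spdk
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in from the initiator-side interface.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions, as the log does with ping -c 1.
run ping -c 1 10.0.0.2
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

In the real run these commands execute for real (the `ipts` helper in the log wraps iptables and tags the rule with an SPDK_NVMF comment), and nvmf_tgt is then launched inside the namespace via NVMF_TARGET_NS_CMD.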
00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:03:50.740 [2024-11-26 19:10:23.662408] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:03:50.740 [2024-11-26 19:10:23.662478] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:03:50.740 [2024-11-26 19:10:23.739692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:03:50.740 [2024-11-26 19:10:23.786083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:03:50.740 [2024-11-26 19:10:23.786139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:03:50.740 [2024-11-26 19:10:23.786146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.740 [2024-11-26 19:10:23.786151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.740 [2024-11-26 19:10:23.786156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
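Once nvmf_tgt is listening on /var/tmp/spdk.sock, the test drives it entirely through rpc.py; the bring-up that appears in the log below (transport, subsystem, listeners, then the Malloc0/Delay0/NULL1 bdev chain and namespaces) amounts to the sequence sketched here. rpc() is an illustrative stub standing in for the spdk/scripts/rpc.py path from the log, so the sequence can be shown without a running target:

```shell
#!/bin/sh
# RPC bring-up sequence as logged by ns_hotplug_stress.sh lines 27-36.
# rpc() echoes instead of invoking spdk/scripts/rpc.py; n counts calls.
n=0
rpc() { echo "rpc.py $*"; n=$((n+1)); }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Delay0 wraps Malloc0 with 1 ms latencies; NULL1 is a 1000 MiB null bdev.
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

Every argument above is copied from the log; only the rpc() stub is invented for illustration.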
00:03:50.740 [2024-11-26 19:10:23.791129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:50.740 [2024-11-26 19:10:23.791426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:50.740 [2024-11-26 19:10:23.791427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:03:50.740 19:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:03:50.740 [2024-11-26 19:10:24.082212] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:50.740 19:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:03:50.740 19:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:03:50.740 [2024-11-26 19:10:24.436798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:03:50.740 19:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:03:50.999 19:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:03:50.999 Malloc0 00:03:50.999 19:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:03:51.259 Delay0 00:03:51.259 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:51.518 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:03:51.518 NULL1 00:03:51.518 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:03:51.777 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3480739 00:03:51.777 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:03:51.777 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:51.777 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:52.035 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:52.035 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:03:52.035 19:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:03:52.293 true 00:03:52.293 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:52.293 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:52.551 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:52.551 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:03:52.551 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:03:52.810 true 00:03:52.810 19:10:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:52.810 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:53.069 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:53.069 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:03:53.069 19:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:03:53.328 true 00:03:53.328 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:53.328 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:53.328 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:53.587 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:03:53.587 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:03:53.845 true 00:03:53.845 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:53.845 19:10:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:53.845 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:54.103 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:03:54.103 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:03:54.362 true 00:03:54.362 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:54.362 19:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:54.362 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:54.621 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:03:54.621 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:03:54.621 true 00:03:54.621 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:54.621 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:54.879 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:55.137 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:03:55.137 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:03:55.137 true 00:03:55.137 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:55.137 19:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:55.398 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:55.663 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:03:55.663 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:03:55.663 true 00:03:55.663 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:55.663 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:55.922 
19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:55.922 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:03:55.922 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:03:56.181 true 00:03:56.181 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:56.181 19:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:56.439 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:56.439 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:03:56.439 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:03:56.724 true 00:03:56.724 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:56.724 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:56.724 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:56.982 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:03:56.982 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:03:57.241 true 00:03:57.241 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:57.241 19:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:57.241 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:57.500 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:03:57.500 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:03:57.758 true 00:03:57.758 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:57.758 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:57.758 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:58.018 
19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:03:58.018 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:03:58.018 true 00:03:58.277 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:58.277 19:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:58.277 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:58.535 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:03:58.535 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:03:58.535 true 00:03:58.535 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:58.535 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:58.793 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:59.051 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:03:59.051 19:10:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:03:59.051 true 00:03:59.052 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:59.052 19:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:59.310 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:03:59.569 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:03:59.569 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:03:59.569 true 00:03:59.569 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:03:59.569 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:03:59.829 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:00.089 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:04:00.089 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:04:00.089 true 00:04:00.089 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:00.089 19:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:00.348 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:00.348 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:04:00.348 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:04:00.606 true 00:04:00.606 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:00.606 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:00.865 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:00.865 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:04:00.865 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:04:01.125 true 00:04:01.125 19:10:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:01.125 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:01.384 19:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:01.384 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:04:01.384 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:04:01.643 true 00:04:01.643 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:01.643 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:01.643 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:01.903 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:04:01.903 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:04:02.163 true 00:04:02.163 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:02.163 19:10:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:02.163 19:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:02.422 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:04:02.422 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:04:02.422 true 00:04:02.681 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:02.681 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:02.681 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:02.941 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:04:02.941 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:04:02.941 true 00:04:02.941 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:02.941 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:03.200 19:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:03.459 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:04:03.459 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:04:03.459 true 00:04:03.459 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:03.459 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:03.719 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:03.979 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:04:03.979 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:04:03.979 true 00:04:03.979 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:03.979 19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:04.239 
19:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:04.239 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:04:04.239 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:04:04.499 true 00:04:04.499 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:04.499 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:04.759 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:04.759 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:04:04.759 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:04:05.019 true 00:04:05.019 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:05.019 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:05.277 19:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:05.277 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:04:05.277 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:04:05.534 true 00:04:05.534 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:05.534 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:05.534 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:05.792 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:04:05.792 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:04:06.051 true 00:04:06.051 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:06.051 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:06.051 19:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:06.309 
19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:04:06.309 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:04:06.309 true 00:04:06.569 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:06.569 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:06.569 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:06.827 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:04:06.827 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:04:06.827 true 00:04:06.827 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:06.827 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:07.086 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:07.345 19:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:04:07.345 19:10:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:04:07.345 true 00:04:07.345 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:07.345 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:07.605 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:07.605 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:04:07.605 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:04:07.863 true 00:04:07.863 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:07.863 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:08.122 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:08.122 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:04:08.122 19:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:04:08.381 true 00:04:08.381 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:08.381 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:08.641 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:08.641 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:04:08.641 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:04:08.901 true 00:04:08.901 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:08.901 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:08.901 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:09.160 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:04:09.160 19:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:04:09.420 true 00:04:09.420 19:10:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:09.420 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:09.420 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:09.679 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:04:09.679 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:04:09.937 true 00:04:09.937 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:09.937 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:09.937 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:10.197 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:04:10.197 19:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:04:10.197 true 00:04:10.197 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:10.197 19:10:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:10.455 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:10.714 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:04:10.714 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:04:10.714 true 00:04:10.714 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:10.714 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:10.972 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:11.232 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:04:11.232 19:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:04:11.232 true 00:04:11.232 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:11.232 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:11.492 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:11.492 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:04:11.492 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:04:11.750 true 00:04:11.750 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:11.750 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:12.009 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:12.009 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:04:12.009 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:04:12.268 true 00:04:12.268 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:12.268 19:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:12.526 
19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:12.526 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:04:12.526 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:04:12.784 true 00:04:12.784 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:12.784 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:13.042 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:13.042 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:04:13.042 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:04:13.300 true 00:04:13.300 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:13.300 19:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:13.300 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:13.706 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:04:13.706 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:04:13.706 true 00:04:13.706 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:13.706 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:13.969 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:13.969 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:04:13.969 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:04:14.228 true 00:04:14.228 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:14.228 19:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:14.488 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:14.488 
19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:04:14.488 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:04:14.746 true 00:04:14.746 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:14.746 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:14.746 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:15.004 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:04:15.004 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:04:15.264 true 00:04:15.264 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:15.264 19:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:15.264 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:15.523 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:04:15.523 19:10:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:04:15.523 true 00:04:15.781 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:15.781 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:15.781 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:16.040 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:04:16.040 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:04:16.040 true 00:04:16.040 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:16.040 19:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:16.299 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:16.558 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:04:16.558 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:04:16.558 true 00:04:16.558 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:16.558 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:16.817 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:17.075 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:04:17.075 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:04:17.075 true 00:04:17.075 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:17.075 19:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:17.333 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:17.333 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:04:17.333 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:04:17.591 true 00:04:17.591 19:10:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:17.592 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:17.850 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:17.850 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:04:17.850 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:04:18.110 true 00:04:18.110 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:18.110 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:18.369 19:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:18.369 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:04:18.369 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:04:18.628 true 00:04:18.628 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:18.628 19:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:18.628 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:18.886 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:04:18.886 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:04:19.144 true 00:04:19.144 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:19.144 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:19.144 19:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:19.403 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:04:19.403 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:04:19.661 true 00:04:19.661 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:19.661 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:19.661 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:19.921 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:04:19.921 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:04:19.921 true 00:04:19.921 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:19.921 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:20.179 19:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:20.438 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:04:20.438 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:04:20.438 true 00:04:20.438 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:20.438 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:20.697 
19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:20.955 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:04:20.955 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:04:20.955 true 00:04:20.955 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:20.955 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:21.213 19:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:21.472 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:04:21.472 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:04:21.472 true 00:04:21.472 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:21.472 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:21.730 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:21.730 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:04:21.730 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:04:21.987 true 00:04:21.987 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739 00:04:21.987 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:21.987 Initializing NVMe Controllers 00:04:21.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:04:21.987 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1 00:04:21.987 Controller IO queue size 128, less than required. 00:04:21.987 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:04:21.987 WARNING: Some requested NVMe devices were skipped 00:04:21.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:04:21.987 Initialization complete. Launching workers. 
00:04:21.987 ========================================================
00:04:21.987 Latency(us)
00:04:21.987 Device Information : IOPS MiB/s Average min max
00:04:21.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30430.74 14.86 4206.33 1732.48 43513.08
00:04:21.987 ========================================================
00:04:21.987 Total : 30430.74 14.86 4206.33 1732.48 43513.08
00:04:22.245 19:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:04:22.245 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063
00:04:22.245 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063
00:04:22.504 true
00:04:22.504 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3480739
00:04:22.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3480739) - No such process
00:04:22.504 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3480739
00:04:22.504 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:04:22.504 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:04:22.763 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:04:22.763 19:10:56
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:04:22.763 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:04:22.763 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:22.763 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:04:23.021 null0 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:04:23.021 null1 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.021 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:04:23.280 null2 00:04:23.280 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.280 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.280 19:10:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:04:23.280 null3 
00:04:23.280 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.280 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.280 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:04:23.539 null4 00:04:23.539 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.539 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.539 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:04:23.798 null5 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:04:23.798 null6 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:23.798 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:04:24.058 null7 00:04:24.058 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3487978 3487979 3487981 3487982 3487983 3487986 3487988 3487990 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.059 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:24.319 19:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.320 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:24.580 19:10:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.580 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:24.840 19:10:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:24.840 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:25.100 
19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.100 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:25.101 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:25.361 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:04:25.361 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.620 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.621 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:25.880 19:10:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:25.880 19:10:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:25.880 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:26.140 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.140 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.141 19:10:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:26.141 19:10:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:26.400 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.400 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.400 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:26.401 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:26.661 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:26.924 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:04:27.185 
19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.185 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.185 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.444 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:27.703 rmmod nvme_tcp 00:04:27.703 rmmod nvme_fabrics 00:04:27.703 rmmod nvme_keyring 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3480355 ']' 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3480355 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3480355 ']' 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3480355 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3480355 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480355' 00:04:27.703 killing process with pid 3480355 00:04:27.703 19:11:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3480355 00:04:27.703 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3480355 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:27.961 19:11:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:29.863 00:04:29.863 real 0m45.766s 00:04:29.863 user 3m11.677s 00:04:29.863 sys 0m15.444s 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:29.863 ************************************ 00:04:29.863 END TEST nvmf_ns_hotplug_stress 00:04:29.863 ************************************ 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.863 19:11:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:29.863 ************************************ 00:04:29.864 START TEST nvmf_delete_subsystem 00:04:29.864 ************************************ 00:04:29.864 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:04:29.864 * Looking for test storage... 
00:04:29.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:29.864 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:29.864 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:04:29.864 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:04:30.124 19:11:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
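The `cmp_versions 1.15 '<' 2` walk traced above splits each dotted version on `.`/`-`/`:` and compares component by component. A minimal sketch of that comparison (the function name `ver_lt` is mine; `scripts/common.sh` implements it as `lt` calling `cmp_versions`):

```shell
# Return 0 (true) when dotted version $1 is strictly less than $2,
# comparing numeric components left to right, missing components as 0.
ver_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local n v
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( v = 0; v < n; v++ )); do
    if (( ${v1[v]:-0} < ${v2[v]:-0} )); then return 0; fi
    if (( ${v1[v]:-0} > ${v2[v]:-0} )); then return 1; fi
  done
  return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is what gates the choice of lcov options in the log: since lcov 1.15 is below 2, the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` spelling is selected.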
00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.124 --rc genhtml_branch_coverage=1 00:04:30.124 --rc genhtml_function_coverage=1 00:04:30.124 --rc genhtml_legend=1 00:04:30.124 --rc geninfo_all_blocks=1 00:04:30.124 --rc geninfo_unexecuted_blocks=1 00:04:30.124 00:04:30.124 ' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.124 --rc genhtml_branch_coverage=1 00:04:30.124 --rc genhtml_function_coverage=1 00:04:30.124 --rc genhtml_legend=1 00:04:30.124 --rc geninfo_all_blocks=1 00:04:30.124 --rc geninfo_unexecuted_blocks=1 00:04:30.124 00:04:30.124 ' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.124 --rc genhtml_branch_coverage=1 00:04:30.124 --rc genhtml_function_coverage=1 00:04:30.124 --rc genhtml_legend=1 00:04:30.124 --rc geninfo_all_blocks=1 00:04:30.124 --rc geninfo_unexecuted_blocks=1 00:04:30.124 00:04:30.124 ' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.124 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.124 --rc genhtml_branch_coverage=1 00:04:30.124 --rc genhtml_function_coverage=1 00:04:30.124 --rc genhtml_legend=1 00:04:30.124 --rc geninfo_all_blocks=1 00:04:30.124 --rc geninfo_unexecuted_blocks=1 00:04:30.124 00:04:30.124 ' 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.124 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:30.125 19:11:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:30.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:04:30.125 19:11:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:35.399 19:11:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:35.399 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:35.399 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:35.399 Found net devices under 0000:31:00.0: cvl_0_0 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:31:00.1: cvl_0_1' 00:04:35.399 Found net devices under 0000:31:00.1: cvl_0_1 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:35.399 19:11:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:35.399 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:35.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:04:35.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:04:35.400 00:04:35.400 --- 10.0.0.2 ping statistics --- 00:04:35.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:35.400 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:35.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:04:35.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:04:35.400 00:04:35.400 --- 10.0.0.1 ping statistics --- 00:04:35.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:35.400 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:35.400 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:04:35.659 19:11:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3493471 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3493471 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3493471 ']' 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.659 19:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:35.659 [2024-11-26 19:11:09.312988] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:04:35.659 [2024-11-26 19:11:09.313037] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:35.659 [2024-11-26 19:11:09.395872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.659 [2024-11-26 19:11:09.433858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:35.659 [2024-11-26 19:11:09.433892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:35.659 [2024-11-26 19:11:09.433900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:35.659 [2024-11-26 19:11:09.433907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:35.659 [2024-11-26 19:11:09.433913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:35.659 [2024-11-26 19:11:09.435144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.659 [2024-11-26 19:11:09.435173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.228 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.228 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:04:36.228 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:36.228 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.228 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.487 [2024-11-26 19:11:10.119325] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.487 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.488 [2024-11-26 19:11:10.135522] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.488 NULL1 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.488 Delay0 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.488 19:11:10 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3493683 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:04:36.488 19:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:04:36.488 [2024-11-26 19:11:10.210058] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:04:38.391 19:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:04:38.391 19:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.391 19:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error 
(sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 [2024-11-26 19:11:12.378691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6290000c40 is same with the state(6) to be set 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 
00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read 
completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error 
(sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 starting I/O failed: -6 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 [2024-11-26 19:11:12.379106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a2c0 is same with the state(6) to be set 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 
Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.649 Write completed with error (sct=0, sc=8) 00:04:38.649 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Write completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error 
(sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:38.650 Read completed with error (sct=0, sc=8) 00:04:39.600 [2024-11-26 19:11:13.351634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202b5f0 is same with the state(6) to be set 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 
00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 [2024-11-26 19:11:13.380828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a4a0 is same with the state(6) to be set 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 [2024-11-26 19:11:13.381006] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202a0e0 is same with the state(6) to be set 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 [2024-11-26 19:11:13.381121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f629000d020 is same with the state(6) to be set 00:04:39.600 Write completed with error (sct=0, sc=8) 
00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Read completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 Write completed with error (sct=0, sc=8) 00:04:39.600 [2024-11-26 19:11:13.381519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f629000d680 is same with the state(6) to be set 00:04:39.600 Initializing NVMe Controllers 00:04:39.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:04:39.600 Controller IO queue size 128, less than required. 00:04:39.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:04:39.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:04:39.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:04:39.600 Initialization complete. Launching workers. 00:04:39.600 ======================================================== 00:04:39.600 Latency(us) 00:04:39.600 Device Information : IOPS MiB/s Average min max 00:04:39.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.48 0.08 977352.30 220.98 2001394.73 00:04:39.600 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 171.43 0.08 911792.94 340.39 2001481.00 00:04:39.600 ======================================================== 00:04:39.600 Total : 334.90 0.16 943794.46 220.98 2001481.00 00:04:39.600 00:04:39.600 [2024-11-26 19:11:13.381919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202b5f0 (9): Bad file descriptor 00:04:39.600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:04:39.600 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.600 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:04:39.600 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3493683 00:04:39.600 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3493683 00:04:40.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3493683) - No such process 00:04:40.166 19:11:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3493683 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3493683 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3493683 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.166 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 [2024-11-26 19:11:13.903399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3494499 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:40.167 19:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 
-t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:04:40.167 [2024-11-26 19:11:13.961714] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:04:40.732 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:40.732 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:40.732 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:41.299 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:41.299 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:41.299 19:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:41.866 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:41.866 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:41.866 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:42.124 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:42.124 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:42.125 19:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:42.692 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:42.692 19:11:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:42.692 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:43.259 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:43.259 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:43.259 19:11:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:04:43.518 Initializing NVMe Controllers 00:04:43.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:04:43.518 Controller IO queue size 128, less than required. 00:04:43.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:04:43.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:04:43.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:04:43.518 Initialization complete. Launching workers. 
00:04:43.518 ======================================================== 00:04:43.518 Latency(us) 00:04:43.518 Device Information : IOPS MiB/s Average min max 00:04:43.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002908.25 1000118.19 1006792.72 00:04:43.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002145.65 1000315.73 1042578.16 00:04:43.518 ======================================================== 00:04:43.518 Total : 256.00 0.12 1002526.95 1000118.19 1042578.16 00:04:43.518 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3494499 00:04:43.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3494499) - No such process 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3494499 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:04:43.777 rmmod nvme_tcp 00:04:43.777 rmmod nvme_fabrics 00:04:43.777 rmmod nvme_keyring 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3493471 ']' 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3493471 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3493471 ']' 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3493471 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3493471 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3493471' 00:04:43.777 killing process with pid 3493471 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3493471 00:04:43.777 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
3493471 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:44.035 19:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:45.940 00:04:45.940 real 0m16.029s 00:04:45.940 user 0m29.857s 00:04:45.940 sys 0m5.168s 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:04:45.940 ************************************ 00:04:45.940 END TEST 
nvmf_delete_subsystem 00:04:45.940 ************************************ 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:45.940 ************************************ 00:04:45.940 START TEST nvmf_host_management 00:04:45.940 ************************************ 00:04:45.940 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:04:45.940 * Looking for test storage... 00:04:45.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.200 19:11:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.200 --rc genhtml_branch_coverage=1 00:04:46.200 --rc genhtml_function_coverage=1 00:04:46.200 --rc genhtml_legend=1 00:04:46.200 --rc 
geninfo_all_blocks=1 00:04:46.200 --rc geninfo_unexecuted_blocks=1 00:04:46.200 00:04:46.200 ' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.200 --rc genhtml_branch_coverage=1 00:04:46.200 --rc genhtml_function_coverage=1 00:04:46.200 --rc genhtml_legend=1 00:04:46.200 --rc geninfo_all_blocks=1 00:04:46.200 --rc geninfo_unexecuted_blocks=1 00:04:46.200 00:04:46.200 ' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.200 --rc genhtml_branch_coverage=1 00:04:46.200 --rc genhtml_function_coverage=1 00:04:46.200 --rc genhtml_legend=1 00:04:46.200 --rc geninfo_all_blocks=1 00:04:46.200 --rc geninfo_unexecuted_blocks=1 00:04:46.200 00:04:46.200 ' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.200 --rc genhtml_branch_coverage=1 00:04:46.200 --rc genhtml_function_coverage=1 00:04:46.200 --rc genhtml_legend=1 00:04:46.200 --rc geninfo_all_blocks=1 00:04:46.200 --rc geninfo_unexecuted_blocks=1 00:04:46.200 00:04:46.200 ' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:46.200 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:04:46.200 
19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:46.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:04:46.201 19:11:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:04:51.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:04:51.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:04:51.477 19:11:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:51.477 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:04:51.477 Found net devices under 0000:31:00.0: cvl_0_0 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:51.478 19:11:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:04:51.478 Found net devices under 0000:31:00.1: cvl_0_1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:04:51.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:51.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:04:51.478 00:04:51.478 --- 10.0.0.2 ping statistics --- 00:04:51.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:51.478 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:04:51.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:51.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:04:51.478 00:04:51.478 --- 10.0.0.1 ping statistics --- 00:04:51.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:51.478 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.478 19:11:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3499645 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3499645 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3499645 ']' 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.478 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:04:51.478 [2024-11-26 19:11:25.308253] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:04:51.478 [2024-11-26 19:11:25.308304] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:51.738 [2024-11-26 19:11:25.379575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.738 [2024-11-26 19:11:25.410954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:51.738 [2024-11-26 19:11:25.410981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:51.738 [2024-11-26 19:11:25.410987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.738 [2024-11-26 19:11:25.410992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.738 [2024-11-26 19:11:25.410996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:04:51.738 [2024-11-26 19:11:25.412483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.738 [2024-11-26 19:11:25.412628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.738 [2024-11-26 19:11:25.412820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.738 [2024-11-26 19:11:25.412822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.738 [2024-11-26 19:11:25.520403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:04:51.738 19:11:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.738 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.738 Malloc0 00:04:51.738 [2024-11-26 19:11:25.593803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3499898 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3499898 /var/tmp/bdevperf.sock 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3499898 ']' 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:04:51.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:04:51.998 { 00:04:51.998 "params": { 00:04:51.998 "name": "Nvme$subsystem", 00:04:51.998 "trtype": "$TEST_TRANSPORT", 00:04:51.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:04:51.998 "adrfam": "ipv4", 00:04:51.998 "trsvcid": "$NVMF_PORT", 00:04:51.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:04:51.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:04:51.998 "hdgst": ${hdgst:-false}, 
00:04:51.998 "ddgst": ${ddgst:-false} 00:04:51.998 }, 00:04:51.998 "method": "bdev_nvme_attach_controller" 00:04:51.998 } 00:04:51.998 EOF 00:04:51.998 )") 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:04:51.998 19:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:04:51.998 "params": { 00:04:51.998 "name": "Nvme0", 00:04:51.998 "trtype": "tcp", 00:04:51.998 "traddr": "10.0.0.2", 00:04:51.998 "adrfam": "ipv4", 00:04:51.998 "trsvcid": "4420", 00:04:51.998 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:04:51.998 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:04:51.998 "hdgst": false, 00:04:51.998 "ddgst": false 00:04:51.998 }, 00:04:51.998 "method": "bdev_nvme_attach_controller" 00:04:51.998 }' 00:04:51.998 [2024-11-26 19:11:25.666448] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:04:51.998 [2024-11-26 19:11:25.666498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499898 ] 00:04:51.998 [2024-11-26 19:11:25.744636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.998 [2024-11-26 19:11:25.781244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.258 Running I/O for 10 seconds... 
00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:04:52.829 
19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.829 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:52.829 [2024-11-26 19:11:26.520600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10011b0 is same with the state(6) to be set 00:04:52.829 [2024-11-26 19:11:26.520636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10011b0 is same with the state(6) to be set 00:04:52.829 [2024-11-26 19:11:26.520948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.520985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 
[2024-11-26 19:11:26.521096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521194] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521290] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.829 [2024-11-26 19:11:26.521331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.829 [2024-11-26 19:11:26.521340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:04:52.830 [2024-11-26 19:11:26.521482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:04:52.830 [2024-11-26 19:11:26.521868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 
19:11:26.521960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.521986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.521994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.830 [2024-11-26 19:11:26.522003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.830 [2024-11-26 19:11:26.522011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.831 [2024-11-26 19:11:26.522020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.831 [2024-11-26 19:11:26.522027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.831 [2024-11-26 19:11:26.522037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:04:52.831 [2024-11-26 19:11:26.522044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:04:52.831 [2024-11-26 19:11:26.522053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:04:52.831 [2024-11-26 19:11:26.522061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:04:52.831 [2024-11-26 19:11:26.522071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:04:52.831 [2024-11-26 19:11:26.522079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:04:52.831 [2024-11-26 19:11:26.522088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1798270 is same with the state(6) to be set
00:04:52.831 [2024-11-26 19:11:26.523316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:52.831 task offset: 109568 on job bdev=Nvme0n1 fails
00:04:52.831
00:04:52.831 Latency(us)
00:04:52.831 [2024-11-26T18:11:26.696Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:04:52.831 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:04:52.831 Job: Nvme0n1 ended in about 0.55 seconds with error
00:04:52.831 Verification LBA range: start 0x0 length 0x400
00:04:52.831 Nvme0n1 : 0.55 1508.10 94.26 116.01 0.00 38424.12 1576.96 35389.44
00:04:52.831 [2024-11-26T18:11:26.696Z] ===================================================================================================================
00:04:52.831 [2024-11-26T18:11:26.696Z] Total : 1508.10 94.26 116.01 0.00 38424.12 1576.96 35389.44
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host
nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:52.831 [2024-11-26 19:11:26.525330] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:04:52.831 [2024-11-26 19:11:26.525355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1787b10 (9): Bad file descriptor
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:04:52.831 [2024-11-26 19:11:26.526523] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:04:52.831 [2024-11-26 19:11:26.526583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:04:52.831 [2024-11-26 19:11:26.526605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:04:52.831 [2024-11-26 19:11:26.526617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:04:52.831 [2024-11-26 19:11:26.526628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:04:52.831 [2024-11-26 19:11:26.526636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:04:52.831 [2024-11-26 19:11:26.526643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1787b10
00:04:52.831 [2024-11-26 19:11:26.526662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1787b10 (9): Bad file descriptor
00:04:52.831 [2024-11-26 19:11:26.526675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:04:52.831 [2024-11-26 19:11:26.526682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:04:52.831 [2024-11-26 19:11:26.526691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:04:52.831 [2024-11-26 19:11:26.526700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:52.831 19:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3499898
00:04:53.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3499898) - No such process
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:04:53.770 {
00:04:53.770 "params": {
00:04:53.770 "name": "Nvme$subsystem",
00:04:53.770 "trtype": "$TEST_TRANSPORT",
00:04:53.770 "traddr": "$NVMF_FIRST_TARGET_IP",
00:04:53.770 "adrfam": "ipv4",
00:04:53.770 "trsvcid": "$NVMF_PORT",
00:04:53.770 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:04:53.770 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:04:53.770 "hdgst": ${hdgst:-false},
00:04:53.770 "ddgst": ${ddgst:-false}
00:04:53.770 },
00:04:53.770 "method": "bdev_nvme_attach_controller"
00:04:53.770 }
00:04:53.770 EOF
00:04:53.770 )")
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:04:53.770 19:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:04:53.770 "params": {
00:04:53.770 "name": "Nvme0",
00:04:53.770 "trtype": "tcp",
00:04:53.770 "traddr": "10.0.0.2",
00:04:53.770 "adrfam": "ipv4",
00:04:53.770 "trsvcid": "4420",
00:04:53.770 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:04:53.770 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:04:53.770 "hdgst": false,
00:04:53.770 "ddgst": false
00:04:53.770 },
00:04:53.770 "method": "bdev_nvme_attach_controller"
00:04:53.770 }'
00:04:53.770 [2024-11-26 19:11:27.571848] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization...
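The gen_nvmf_target_json trace above collects one JSON fragment per subsystem in a bash array and comma-joins the fragments into the --json config that bdevperf reads from /dev/fd/62. A self-contained sketch of that accumulate-and-join pattern; the address, port, and NQNs are placeholders rather than values from this run:

```shell
#!/usr/bin/env bash
# Build a per-subsystem attach-controller config the way the traced
# gen_nvmf_target_json helper does: heredoc fragment per subsystem,
# accumulated in an array, then comma-joined via "${config[*]}" with IFS=','.
config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as the traced IFS=, step does; the real
# helper then normalizes the result with `jq .` before feeding it to bdevperf.
IFS=','
printf '%s\n' "${config[*]}"
```

With more than one subsystem in the loop, the join produces a comma-separated sequence of objects ready to be wrapped into a JSON array for the target config.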
00:04:53.770 [2024-11-26 19:11:27.571900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3500257 ]
00:04:54.029 [2024-11-26 19:11:27.649229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.029 [2024-11-26 19:11:27.685199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:54.288 Running I/O for 1 seconds...
00:04:55.224 1792.00 IOPS, 112.00 MiB/s
00:04:55.224
00:04:55.224 Latency(us)
00:04:55.224 [2024-11-26T18:11:29.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:04:55.224 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:04:55.224 Verification LBA range: start 0x0 length 0x400
00:04:55.224 Nvme0n1 : 1.02 1813.16 113.32 0.00 0.00 34605.75 2744.32 32112.64
00:04:55.224 [2024-11-26T18:11:29.089Z] ===================================================================================================================
00:04:55.224 [2024-11-26T18:11:29.089Z] Total : 1813.16 113.32 0.00 0.00 34605.75 2744.32 32112.64
00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:04:55.483 19:11:29
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:04:55.483 rmmod nvme_tcp 00:04:55.483 rmmod nvme_fabrics 00:04:55.483 rmmod nvme_keyring 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3499645 ']' 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3499645 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3499645 ']' 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3499645 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3499645 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3499645' 00:04:55.483 killing process with pid 3499645 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3499645 00:04:55.483 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3499645 00:04:55.483 [2024-11-26 19:11:29.338851] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:55.741 19:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:04:57.649 00:04:57.649 real 0m11.668s 00:04:57.649 user 0m19.599s 00:04:57.649 sys 0m4.876s 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:04:57.649 ************************************ 00:04:57.649 END TEST nvmf_host_management 00:04:57.649 ************************************ 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:57.649 ************************************ 00:04:57.649 START TEST nvmf_lvol 00:04:57.649 ************************************ 00:04:57.649 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:04:57.649 * Looking for test storage... 
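The nvmftestfini teardown traced above drops SPDK-tagged firewall rules by round-tripping the ruleset through `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filter step run against a static sample instead of a live ruleset (the rule contents are illustrative, not from this run):

```shell
#!/usr/bin/env bash
# Filter out rules tagged with the SPDK_NVMF comment, the same way the
# traced iptr helper prunes the saved ruleset before restoring it.
sample_rules='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'
# Only the untagged rule survives the grep:
# -A INPUT -p tcp --dport 22 -j ACCEPT
printf '%s\n' "$sample_rules" | grep -v SPDK_NVMF
```

Against a live system the filtered output would be piped straight into iptables-restore, leaving non-SPDK rules untouched.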
00:04:57.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.909 19:11:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.909 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.909 --rc genhtml_branch_coverage=1 00:04:57.909 --rc genhtml_function_coverage=1 00:04:57.909 --rc genhtml_legend=1 00:04:57.909 --rc geninfo_all_blocks=1 00:04:57.909 --rc geninfo_unexecuted_blocks=1 
00:04:57.909 00:04:57.909 ' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.910 --rc genhtml_branch_coverage=1 00:04:57.910 --rc genhtml_function_coverage=1 00:04:57.910 --rc genhtml_legend=1 00:04:57.910 --rc geninfo_all_blocks=1 00:04:57.910 --rc geninfo_unexecuted_blocks=1 00:04:57.910 00:04:57.910 ' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.910 --rc genhtml_branch_coverage=1 00:04:57.910 --rc genhtml_function_coverage=1 00:04:57.910 --rc genhtml_legend=1 00:04:57.910 --rc geninfo_all_blocks=1 00:04:57.910 --rc geninfo_unexecuted_blocks=1 00:04:57.910 00:04:57.910 ' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.910 --rc genhtml_branch_coverage=1 00:04:57.910 --rc genhtml_function_coverage=1 00:04:57.910 --rc genhtml_legend=1 00:04:57.910 --rc geninfo_all_blocks=1 00:04:57.910 --rc geninfo_unexecuted_blocks=1 00:04:57.910 00:04:57.910 ' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:57.910 19:11:31 
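The `lt 1.15 2` check traced above (via `cmp_versions` in `scripts/common.sh`) splits each version string on `.`, `-` and `:` with `IFS=.-:` and compares the fields numerically, left to right, padding the shorter version with zeros. A minimal Python sketch of that same component-wise comparison — the function name `version_lt` is mine, not from the script:

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Component-wise version compare, mirroring the shell cmp_versions logic:
    split on '.', '-' and ':', compare numeric fields left to right,
    and treat a missing trailing field as 0."""
    va = [int(x) for x in re.split(r"[.\-:]", a) if x.isdigit()]
    vb = [int(x) for x in re.split(r"[.\-:]", b) if x.isdigit()]
    # Pad the shorter list with zeros so both versions have the same length.
    n = max(len(va), len(vb))
    va += [0] * (n - len(va))
    vb += [0] * (n - len(vb))
    for x, y in zip(va, vb):
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # the lcov-version check the log performs
```

Here `1.15` expands to `[1, 15]` against `[2, 0]`, so the first field decides and the script takes the "lcov older than 2" branch, selecting the `--rc lcov_*` option spelling seen in the `LCOV_OPTS` exports that follow.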
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:57.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:04:57.910 19:11:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:03.347 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:03.347 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:03.347 
19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:03.347 Found net devices under 0000:31:00.0: cvl_0_0 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:03.347 19:11:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:03.347 Found net devices under 0000:31:00.1: cvl_0_1 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:05:03.347 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:03.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:03.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:05:03.348 00:05:03.348 --- 10.0.0.2 ping statistics --- 00:05:03.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.348 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:03.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:03.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:05:03.348 00:05:03.348 --- 10.0.0.1 ping statistics --- 00:05:03.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:03.348 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
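The two `ping -c 1` runs above verify connectivity in both directions across the `cvl_0_0_ns_spdk` namespace boundary before the target is started. A small sketch of parsing the `rtt min/avg/max/mdev` summary line that ping emits — the parser is my own illustration, not part of the test scripts:

```python
import re

def parse_rtt(line: str) -> dict:
    """Parse a ping summary line such as
    'rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms'
    into named float fields (values are in milliseconds)."""
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line
    )
    if not m:
        raise ValueError("not a ping rtt summary line")
    return dict(zip(("min", "avg", "max", "mdev"), map(float, m.groups())))

# The two summary lines recorded in the log above:
print(parse_rtt("rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms"))
print(parse_rtt("rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms"))
```

With `-c 1` there is a single probe, so min, avg and max coincide and mdev is 0.000, exactly as both summaries show.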
common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3504955 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3504955 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3504955 ']' 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.348 19:11:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:03.348 [2024-11-26 19:11:37.038714] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:05:03.348 [2024-11-26 19:11:37.038778] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:03.348 [2024-11-26 19:11:37.128594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.348 [2024-11-26 19:11:37.180835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:03.348 [2024-11-26 19:11:37.180884] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:03.348 [2024-11-26 19:11:37.180892] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:03.348 [2024-11-26 19:11:37.180900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:03.348 [2024-11-26 19:11:37.180906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
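The target is launched with `-m 0x7` and the EAL trace then reports "Total cores available: 3" with reactors started on cores 0, 1 and 2; the `spdk_nvme_perf` client further down runs with `-c 0x18`. A sketch of how such a hex core mask expands into core IDs (one bit per core; the helper name is mine):

```python
def mask_to_cores(mask: int) -> list[int]:
    """Expand a CPU core bitmask into the list of selected core IDs."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

print(mask_to_cores(0x7))   # nvmf_tgt -m 0x7  -> cores 0, 1, 2
print(mask_to_cores(0x18))  # perf -c 0x18     -> cores 3, 4
```

This matches the log: three reactors on cores 0-2 for the target, and perf I/O reported "from core 3" and "from core 4" in the results table, so the two masks keep target and initiator threads on disjoint cores.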
00:05:03.348 [2024-11-26 19:11:37.183024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.348 [2024-11-26 19:11:37.183195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.348 [2024-11-26 19:11:37.183196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:04.286 19:11:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:04.286 [2024-11-26 19:11:37.990113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.286 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:04.546 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:05:04.546 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:05:04.546 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:05:04.546 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:05:04.804 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:05:05.064 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=ac6c9862-ad55-41ce-a3b1-51c8c1450b8a 00:05:05.064 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ac6c9862-ad55-41ce-a3b1-51c8c1450b8a lvol 20 00:05:05.064 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3aaea03d-994b-43a2-a12d-c68d99c10bb3 00:05:05.064 19:11:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:05.324 19:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3aaea03d-994b-43a2-a12d-c68d99c10bb3 00:05:05.324 19:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:05.582 [2024-11-26 19:11:39.318422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:05.582 19:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:05.841 19:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3505650 00:05:05.841 19:11:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:05:05.841 19:11:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:05:06.780 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3aaea03d-994b-43a2-a12d-c68d99c10bb3 MY_SNAPSHOT 00:05:07.039 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b6723959-9935-44a8-bd49-2d5927166188 00:05:07.039 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3aaea03d-994b-43a2-a12d-c68d99c10bb3 30 00:05:07.039 19:11:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b6723959-9935-44a8-bd49-2d5927166188 MY_CLONE 00:05:07.298 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9658741d-cb10-4e1e-b440-aa6c1eaa56ed 00:05:07.298 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 9658741d-cb10-4e1e-b440-aa6c1eaa56ed 00:05:07.557 19:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3505650 00:05:17.536 Initializing NVMe Controllers 00:05:17.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:17.536 Controller IO queue size 128, less than required. 00:05:17.536 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:05:17.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:05:17.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:05:17.536 Initialization complete. Launching workers. 00:05:17.536 ======================================================== 00:05:17.536 Latency(us) 00:05:17.536 Device Information : IOPS MiB/s Average min max 00:05:17.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16613.80 64.90 7706.64 1803.77 33903.34 00:05:17.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17269.10 67.46 7414.38 1832.43 36622.83 00:05:17.536 ======================================================== 00:05:17.536 Total : 33882.89 132.36 7557.68 1803.77 36622.83 00:05:17.536 00:05:17.536 19:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:17.536 19:11:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3aaea03d-994b-43a2-a12d-c68d99c10bb3 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ac6c9862-ad55-41ce-a3b1-51c8c1450b8a 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:17.536 rmmod nvme_tcp 00:05:17.536 rmmod nvme_fabrics 00:05:17.536 rmmod nvme_keyring 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3504955 ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3504955 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3504955 ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3504955 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3504955 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3504955' 00:05:17.536 killing process with pid 3504955 00:05:17.536 19:11:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3504955 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3504955 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:17.536 19:11:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:18.916 00:05:18.916 real 0m21.070s 00:05:18.916 user 1m1.930s 00:05:18.916 sys 0m6.811s 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:05:18.916 ************************************ 00:05:18.916 END TEST 
nvmf_lvol 00:05:18.916 ************************************ 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:18.916 ************************************ 00:05:18.916 START TEST nvmf_lvs_grow 00:05:18.916 ************************************ 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:05:18.916 * Looking for test storage... 00:05:18.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.916 19:11:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.916 --rc genhtml_branch_coverage=1 00:05:18.916 --rc genhtml_function_coverage=1 00:05:18.916 --rc genhtml_legend=1 00:05:18.916 --rc geninfo_all_blocks=1 00:05:18.916 --rc geninfo_unexecuted_blocks=1 00:05:18.916 00:05:18.916 ' 
00:05:18.916 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.916 --rc genhtml_branch_coverage=1 00:05:18.916 --rc genhtml_function_coverage=1 00:05:18.916 --rc genhtml_legend=1 00:05:18.917 --rc geninfo_all_blocks=1 00:05:18.917 --rc geninfo_unexecuted_blocks=1 00:05:18.917 00:05:18.917 ' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.917 --rc genhtml_branch_coverage=1 00:05:18.917 --rc genhtml_function_coverage=1 00:05:18.917 --rc genhtml_legend=1 00:05:18.917 --rc geninfo_all_blocks=1 00:05:18.917 --rc geninfo_unexecuted_blocks=1 00:05:18.917 00:05:18.917 ' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.917 --rc genhtml_branch_coverage=1 00:05:18.917 --rc genhtml_function_coverage=1 00:05:18.917 --rc genhtml_legend=1 00:05:18.917 --rc geninfo_all_blocks=1 00:05:18.917 --rc geninfo_unexecuted_blocks=1 00:05:18.917 00:05:18.917 ' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.917 19:11:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.917 
19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.917 19:11:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:18.917 
19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:05:18.917 19:11:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:24.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:24.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:24.195 
19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:24.195 Found net devices under 0000:31:00.0: cvl_0_0 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:24.195 Found net devices under 0000:31:00.1: cvl_0_1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:24.195 19:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:24.455 19:11:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:24.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:24.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:05:24.455 00:05:24.455 --- 10.0.0.2 ping statistics --- 00:05:24.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.455 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:24.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:24.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:05:24.455 00:05:24.455 --- 10.0.0.1 ping statistics --- 00:05:24.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:24.455 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3512365 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3512365 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3512365 ']' 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:24.455 [2024-11-26 19:11:58.142966] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:05:24.455 [2024-11-26 19:11:58.143015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:24.455 [2024-11-26 19:11:58.212447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.455 [2024-11-26 19:11:58.241334] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:24.455 [2024-11-26 19:11:58.241361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:24.455 [2024-11-26 19:11:58.241369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:24.455 [2024-11-26 19:11:58.241374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:24.455 [2024-11-26 19:11:58.241378] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
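The connectivity checks recorded earlier in this trace (nvmf/common.sh@290 and @291) validate the veth/netns setup by pinging 10.0.0.2 and 10.0.0.1 once each and relying on ping's fixed-format summary. As a hedged illustration only (this parser is not part of the SPDK test scripts; the function name and sample text are assumptions based on the output captured above), those summary fields can be extracted with a couple of regexes:

```python
import re

def parse_ping_summary(output: str) -> dict:
    """Pull packet-loss and rtt statistics out of iputils `ping` summary
    output, e.g. the "1 packets transmitted, 1 received, 0% packet loss"
    and "rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms" lines seen
    in the trace. Illustrative helper, not an SPDK script."""
    stats = {}
    m = re.search(
        r"(\d+) packets transmitted, (\d+) received, (\d+)% packet loss",
        output,
    )
    if m:
        stats["transmitted"] = int(m.group(1))
        stats["received"] = int(m.group(2))
        stats["loss_pct"] = int(m.group(3))
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms",
        output,
    )
    if m:
        (stats["rtt_min"], stats["rtt_avg"],
         stats["rtt_max"], stats["rtt_mdev"]) = map(float, m.groups())
    return stats

# Sample text copied from the ping summary captured in this log.
sample = (
    "1 packets transmitted, 1 received, 0% packet loss, time 0ms\n"
    "rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms\n"
)
print(parse_ping_summary(sample))
```

A non-zero `loss_pct` here would indicate the namespace/interface wiring (ip netns, ip link set, ip addr add, iptables ACCEPT) did not come up correctly before the nvmf target starts.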
00:05:24.455 [2024-11-26 19:11:58.241856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.455 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:24.715 [2024-11-26 19:11:58.477617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:24.715 ************************************ 00:05:24.715 START TEST lvs_grow_clean 00:05:24.715 ************************************ 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:24.715 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:24.974 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:05:24.974 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:05:25.233 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:25.233 19:11:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:25.233 19:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:05:25.233 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:05:25.233 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:05:25.233 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 lvol 150 00:05:25.493 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=cb73214a-1652-4b83-a0de-a7d884ae8e37 00:05:25.494 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:25.494 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:05:25.494 [2024-11-26 19:11:59.355614] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:05:25.494 [2024-11-26 19:11:59.355653] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:05:25.757 true 00:05:25.757 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:25.757 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:05:25.757 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:05:25.757 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:26.016 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb73214a-1652-4b83-a0de-a7d884ae8e37 00:05:26.016 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:26.275 [2024-11-26 19:11:59.969434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.275 19:11:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.534 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3512967 00:05:26.534 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:26.534 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
3512967 /var/tmp/bdevperf.sock 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3512967 ']' 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:26.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.535 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:05:26.535 [2024-11-26 19:12:00.171352] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:05:26.535 [2024-11-26 19:12:00.171405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3512967 ] 00:05:26.535 [2024-11-26 19:12:00.249595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.535 [2024-11-26 19:12:00.285616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.103 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.103 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:05:27.103 19:12:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:05:27.671 Nvme0n1 00:05:27.671 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:05:27.671 [ 00:05:27.671 { 00:05:27.671 "name": "Nvme0n1", 00:05:27.671 "aliases": [ 00:05:27.671 "cb73214a-1652-4b83-a0de-a7d884ae8e37" 00:05:27.671 ], 00:05:27.671 "product_name": "NVMe disk", 00:05:27.671 "block_size": 4096, 00:05:27.671 "num_blocks": 38912, 00:05:27.672 "uuid": "cb73214a-1652-4b83-a0de-a7d884ae8e37", 00:05:27.672 "numa_id": 0, 00:05:27.672 "assigned_rate_limits": { 00:05:27.672 "rw_ios_per_sec": 0, 00:05:27.672 "rw_mbytes_per_sec": 0, 00:05:27.672 "r_mbytes_per_sec": 0, 00:05:27.672 "w_mbytes_per_sec": 0 00:05:27.672 }, 00:05:27.672 "claimed": false, 00:05:27.672 "zoned": false, 00:05:27.672 "supported_io_types": { 00:05:27.672 "read": true, 
00:05:27.672 "write": true, 00:05:27.672 "unmap": true, 00:05:27.672 "flush": true, 00:05:27.672 "reset": true, 00:05:27.672 "nvme_admin": true, 00:05:27.672 "nvme_io": true, 00:05:27.672 "nvme_io_md": false, 00:05:27.672 "write_zeroes": true, 00:05:27.672 "zcopy": false, 00:05:27.672 "get_zone_info": false, 00:05:27.672 "zone_management": false, 00:05:27.672 "zone_append": false, 00:05:27.672 "compare": true, 00:05:27.672 "compare_and_write": true, 00:05:27.672 "abort": true, 00:05:27.672 "seek_hole": false, 00:05:27.672 "seek_data": false, 00:05:27.672 "copy": true, 00:05:27.672 "nvme_iov_md": false 00:05:27.672 }, 00:05:27.672 "memory_domains": [ 00:05:27.672 { 00:05:27.672 "dma_device_id": "system", 00:05:27.672 "dma_device_type": 1 00:05:27.672 } 00:05:27.672 ], 00:05:27.672 "driver_specific": { 00:05:27.672 "nvme": [ 00:05:27.672 { 00:05:27.672 "trid": { 00:05:27.672 "trtype": "TCP", 00:05:27.672 "adrfam": "IPv4", 00:05:27.672 "traddr": "10.0.0.2", 00:05:27.672 "trsvcid": "4420", 00:05:27.672 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:05:27.672 }, 00:05:27.672 "ctrlr_data": { 00:05:27.672 "cntlid": 1, 00:05:27.672 "vendor_id": "0x8086", 00:05:27.672 "model_number": "SPDK bdev Controller", 00:05:27.672 "serial_number": "SPDK0", 00:05:27.672 "firmware_revision": "25.01", 00:05:27.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:27.672 "oacs": { 00:05:27.672 "security": 0, 00:05:27.672 "format": 0, 00:05:27.672 "firmware": 0, 00:05:27.672 "ns_manage": 0 00:05:27.672 }, 00:05:27.672 "multi_ctrlr": true, 00:05:27.672 "ana_reporting": false 00:05:27.672 }, 00:05:27.672 "vs": { 00:05:27.672 "nvme_version": "1.3" 00:05:27.672 }, 00:05:27.672 "ns_data": { 00:05:27.672 "id": 1, 00:05:27.672 "can_share": true 00:05:27.672 } 00:05:27.672 } 00:05:27.672 ], 00:05:27.672 "mp_policy": "active_passive" 00:05:27.672 } 00:05:27.672 } 00:05:27.672 ] 00:05:27.672 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3513310 00:05:27.672 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:05:27.672 19:12:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:05:27.931 Running I/O for 10 seconds... 00:05:28.870 Latency(us) 00:05:28.870 [2024-11-26T18:12:02.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:28.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:28.870 Nvme0n1 : 1.00 25106.00 98.07 0.00 0.00 0.00 0.00 0.00 00:05:28.870 [2024-11-26T18:12:02.735Z] =================================================================================================================== 00:05:28.870 [2024-11-26T18:12:02.735Z] Total : 25106.00 98.07 0.00 0.00 0.00 0.00 0.00 00:05:28.870 00:05:29.809 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:29.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:29.809 Nvme0n1 : 2.00 25224.50 98.53 0.00 0.00 0.00 0.00 0.00 00:05:29.809 [2024-11-26T18:12:03.674Z] =================================================================================================================== 00:05:29.809 [2024-11-26T18:12:03.674Z] Total : 25224.50 98.53 0.00 0.00 0.00 0.00 0.00 00:05:29.809 00:05:30.067 true 00:05:30.067 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:30.067 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:05:30.067 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:05:30.067 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:05:30.067 19:12:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3513310 00:05:31.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:31.003 Nvme0n1 : 3.00 25285.67 98.77 0.00 0.00 0.00 0.00 0.00 00:05:31.003 [2024-11-26T18:12:04.868Z] =================================================================================================================== 00:05:31.003 [2024-11-26T18:12:04.868Z] Total : 25285.67 98.77 0.00 0.00 0.00 0.00 0.00 00:05:31.003 00:05:31.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:31.942 Nvme0n1 : 4.00 25332.00 98.95 0.00 0.00 0.00 0.00 0.00 00:05:31.942 [2024-11-26T18:12:05.807Z] =================================================================================================================== 00:05:31.942 [2024-11-26T18:12:05.808Z] Total : 25332.00 98.95 0.00 0.00 0.00 0.00 0.00 00:05:31.943 00:05:32.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:32.880 Nvme0n1 : 5.00 25359.60 99.06 0.00 0.00 0.00 0.00 0.00 00:05:32.880 [2024-11-26T18:12:06.745Z] =================================================================================================================== 00:05:32.880 [2024-11-26T18:12:06.745Z] Total : 25359.60 99.06 0.00 0.00 0.00 0.00 0.00 00:05:32.880 00:05:33.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:33.815 Nvme0n1 : 6.00 25377.00 99.13 0.00 0.00 0.00 0.00 0.00 00:05:33.815 [2024-11-26T18:12:07.680Z] =================================================================================================================== 00:05:33.815 
[2024-11-26T18:12:07.680Z] Total : 25377.00 99.13 0.00 0.00 0.00 0.00 0.00 00:05:33.815 00:05:34.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:34.751 Nvme0n1 : 7.00 25399.43 99.22 0.00 0.00 0.00 0.00 0.00 00:05:34.751 [2024-11-26T18:12:08.616Z] =================================================================================================================== 00:05:34.751 [2024-11-26T18:12:08.616Z] Total : 25399.43 99.22 0.00 0.00 0.00 0.00 0.00 00:05:34.751 00:05:36.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:36.147 Nvme0n1 : 8.00 25416.50 99.28 0.00 0.00 0.00 0.00 0.00 00:05:36.147 [2024-11-26T18:12:10.012Z] =================================================================================================================== 00:05:36.147 [2024-11-26T18:12:10.012Z] Total : 25416.50 99.28 0.00 0.00 0.00 0.00 0.00 00:05:36.147 00:05:37.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:37.085 Nvme0n1 : 9.00 25428.89 99.33 0.00 0.00 0.00 0.00 0.00 00:05:37.085 [2024-11-26T18:12:10.950Z] =================================================================================================================== 00:05:37.085 [2024-11-26T18:12:10.950Z] Total : 25428.89 99.33 0.00 0.00 0.00 0.00 0.00 00:05:37.085 00:05:38.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:38.023 Nvme0n1 : 10.00 25439.30 99.37 0.00 0.00 0.00 0.00 0.00 00:05:38.023 [2024-11-26T18:12:11.888Z] =================================================================================================================== 00:05:38.023 [2024-11-26T18:12:11.888Z] Total : 25439.30 99.37 0.00 0.00 0.00 0.00 0.00 00:05:38.023 00:05:38.023 00:05:38.023 Latency(us) 00:05:38.023 [2024-11-26T18:12:11.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:38.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:05:38.023 Nvme0n1 : 10.01 25439.01 99.37 0.00 0.00 5028.34 2553.17 8956.59 00:05:38.023 [2024-11-26T18:12:11.888Z] =================================================================================================================== 00:05:38.023 [2024-11-26T18:12:11.888Z] Total : 25439.01 99.37 0.00 0.00 5028.34 2553.17 8956.59 00:05:38.023 { 00:05:38.023 "results": [ 00:05:38.023 { 00:05:38.023 "job": "Nvme0n1", 00:05:38.023 "core_mask": "0x2", 00:05:38.023 "workload": "randwrite", 00:05:38.023 "status": "finished", 00:05:38.023 "queue_depth": 128, 00:05:38.023 "io_size": 4096, 00:05:38.023 "runtime": 10.005147, 00:05:38.023 "iops": 25439.006543332147, 00:05:38.023 "mibps": 99.3711193098912, 00:05:38.023 "io_failed": 0, 00:05:38.023 "io_timeout": 0, 00:05:38.023 "avg_latency_us": 5028.335918424545, 00:05:38.023 "min_latency_us": 2553.173333333333, 00:05:38.024 "max_latency_us": 8956.586666666666 00:05:38.024 } 00:05:38.024 ], 00:05:38.024 "core_count": 1 00:05:38.024 } 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3512967 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3512967 ']' 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3512967 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3512967 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:38.024 19:12:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3512967' 00:05:38.024 killing process with pid 3512967 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3512967 00:05:38.024 Received shutdown signal, test time was about 10.000000 seconds 00:05:38.024 00:05:38.024 Latency(us) 00:05:38.024 [2024-11-26T18:12:11.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:38.024 [2024-11-26T18:12:11.889Z] =================================================================================================================== 00:05:38.024 [2024-11-26T18:12:11.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3512967 00:05:38.024 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:38.283 19:12:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:38.283 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:05:38.283 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:38.542 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:05:38.542 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:05:38.542 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:05:38.801 [2024-11-26 19:12:12.416537] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:38.801 
19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:38.801 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:38.802 request: 00:05:38.802 { 00:05:38.802 "uuid": "99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81", 00:05:38.802 "method": "bdev_lvol_get_lvstores", 00:05:38.802 "req_id": 1 00:05:38.802 } 00:05:38.802 Got JSON-RPC error response 00:05:38.802 response: 00:05:38.802 { 00:05:38.802 "code": -19, 00:05:38.802 "message": "No such device" 00:05:38.802 } 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:38.802 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:39.061 aio_bdev 00:05:39.061 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev cb73214a-1652-4b83-a0de-a7d884ae8e37 00:05:39.061 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=cb73214a-1652-4b83-a0de-a7d884ae8e37 00:05:39.061 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:39.062 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:05:39.062 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:39.062 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:39.062 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:05:39.062 19:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cb73214a-1652-4b83-a0de-a7d884ae8e37 -t 2000 00:05:39.321 [ 00:05:39.321 { 00:05:39.321 "name": "cb73214a-1652-4b83-a0de-a7d884ae8e37", 00:05:39.321 "aliases": [ 00:05:39.321 "lvs/lvol" 00:05:39.321 ], 00:05:39.321 "product_name": "Logical Volume", 00:05:39.321 "block_size": 4096, 00:05:39.321 "num_blocks": 38912, 00:05:39.321 "uuid": "cb73214a-1652-4b83-a0de-a7d884ae8e37", 00:05:39.321 "assigned_rate_limits": { 00:05:39.321 "rw_ios_per_sec": 0, 00:05:39.321 "rw_mbytes_per_sec": 0, 00:05:39.321 "r_mbytes_per_sec": 0, 00:05:39.321 "w_mbytes_per_sec": 0 00:05:39.321 }, 00:05:39.321 "claimed": false, 00:05:39.321 "zoned": false, 00:05:39.321 "supported_io_types": { 00:05:39.321 "read": true, 00:05:39.321 "write": true, 00:05:39.321 "unmap": true, 00:05:39.321 "flush": false, 00:05:39.321 "reset": true, 00:05:39.321 
"nvme_admin": false, 00:05:39.321 "nvme_io": false, 00:05:39.321 "nvme_io_md": false, 00:05:39.321 "write_zeroes": true, 00:05:39.321 "zcopy": false, 00:05:39.321 "get_zone_info": false, 00:05:39.321 "zone_management": false, 00:05:39.321 "zone_append": false, 00:05:39.321 "compare": false, 00:05:39.321 "compare_and_write": false, 00:05:39.321 "abort": false, 00:05:39.321 "seek_hole": true, 00:05:39.321 "seek_data": true, 00:05:39.321 "copy": false, 00:05:39.321 "nvme_iov_md": false 00:05:39.321 }, 00:05:39.321 "driver_specific": { 00:05:39.321 "lvol": { 00:05:39.321 "lvol_store_uuid": "99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81", 00:05:39.321 "base_bdev": "aio_bdev", 00:05:39.321 "thin_provision": false, 00:05:39.321 "num_allocated_clusters": 38, 00:05:39.321 "snapshot": false, 00:05:39.321 "clone": false, 00:05:39.321 "esnap_clone": false 00:05:39.321 } 00:05:39.321 } 00:05:39.321 } 00:05:39.321 ] 00:05:39.321 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:05:39.321 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:39.321 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:05:39.580 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:05:39.580 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:05:39.580 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:39.580 19:12:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:05:39.580 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb73214a-1652-4b83-a0de-a7d884ae8e37 00:05:39.839 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99424f6a-bd8a-4ca2-ba5c-cedd0c2ffc81 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:40.098 00:05:40.098 real 0m15.398s 00:05:40.098 user 0m15.091s 00:05:40.098 sys 0m1.146s 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:05:40.098 ************************************ 00:05:40.098 END TEST lvs_grow_clean 00:05:40.098 ************************************ 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.098 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:40.357 ************************************ 
00:05:40.357 START TEST lvs_grow_dirty 00:05:40.357 ************************************ 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:40.357 19:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:40.357 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:05:40.357 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:05:40.617 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa713f10-8b34-4f7b-8395-ab2299961c25 lvol 150 00:05:40.877 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:40.877 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:40.877 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:05:41.136 [2024-11-26 19:12:14.756627] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:05:41.136 [2024-11-26 19:12:14.756667] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:05:41.136 true 00:05:41.136 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:41.136 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:05:41.136 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:05:41.136 19:12:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:41.396 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:41.396 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:41.655 [2024-11-26 19:12:15.374431] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:41.655 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3517062 00:05:41.914 19:12:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3517062 /var/tmp/bdevperf.sock 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3517062 ']' 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:05:41.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:05:41.914 [2024-11-26 19:12:15.572617] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:05:41.914 [2024-11-26 19:12:15.572669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3517062 ] 00:05:41.914 [2024-11-26 19:12:15.636041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.914 [2024-11-26 19:12:15.665752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:05:41.914 19:12:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:05:42.482 Nvme0n1 00:05:42.482 19:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:05:42.482 [ 00:05:42.482 { 00:05:42.482 "name": "Nvme0n1", 00:05:42.482 "aliases": [ 00:05:42.482 "0d812ec1-6a02-4215-98b5-1b5fcef89ae0" 00:05:42.482 ], 00:05:42.482 "product_name": "NVMe disk", 00:05:42.482 "block_size": 4096, 00:05:42.482 "num_blocks": 38912, 00:05:42.482 "uuid": "0d812ec1-6a02-4215-98b5-1b5fcef89ae0", 00:05:42.482 "numa_id": 0, 00:05:42.482 "assigned_rate_limits": { 00:05:42.482 "rw_ios_per_sec": 0, 00:05:42.482 "rw_mbytes_per_sec": 0, 00:05:42.482 "r_mbytes_per_sec": 0, 00:05:42.482 "w_mbytes_per_sec": 0 00:05:42.482 }, 00:05:42.482 "claimed": false, 00:05:42.482 "zoned": false, 00:05:42.482 "supported_io_types": { 00:05:42.482 "read": true, 
00:05:42.482 "write": true, 00:05:42.482 "unmap": true, 00:05:42.482 "flush": true, 00:05:42.482 "reset": true, 00:05:42.482 "nvme_admin": true, 00:05:42.482 "nvme_io": true, 00:05:42.482 "nvme_io_md": false, 00:05:42.482 "write_zeroes": true, 00:05:42.482 "zcopy": false, 00:05:42.482 "get_zone_info": false, 00:05:42.482 "zone_management": false, 00:05:42.482 "zone_append": false, 00:05:42.482 "compare": true, 00:05:42.482 "compare_and_write": true, 00:05:42.482 "abort": true, 00:05:42.482 "seek_hole": false, 00:05:42.482 "seek_data": false, 00:05:42.482 "copy": true, 00:05:42.482 "nvme_iov_md": false 00:05:42.482 }, 00:05:42.482 "memory_domains": [ 00:05:42.482 { 00:05:42.482 "dma_device_id": "system", 00:05:42.482 "dma_device_type": 1 00:05:42.482 } 00:05:42.482 ], 00:05:42.482 "driver_specific": { 00:05:42.482 "nvme": [ 00:05:42.482 { 00:05:42.482 "trid": { 00:05:42.482 "trtype": "TCP", 00:05:42.482 "adrfam": "IPv4", 00:05:42.482 "traddr": "10.0.0.2", 00:05:42.482 "trsvcid": "4420", 00:05:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:05:42.482 }, 00:05:42.482 "ctrlr_data": { 00:05:42.482 "cntlid": 1, 00:05:42.482 "vendor_id": "0x8086", 00:05:42.482 "model_number": "SPDK bdev Controller", 00:05:42.482 "serial_number": "SPDK0", 00:05:42.482 "firmware_revision": "25.01", 00:05:42.482 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:05:42.482 "oacs": { 00:05:42.482 "security": 0, 00:05:42.482 "format": 0, 00:05:42.482 "firmware": 0, 00:05:42.482 "ns_manage": 0 00:05:42.482 }, 00:05:42.482 "multi_ctrlr": true, 00:05:42.482 "ana_reporting": false 00:05:42.482 }, 00:05:42.482 "vs": { 00:05:42.482 "nvme_version": "1.3" 00:05:42.482 }, 00:05:42.482 "ns_data": { 00:05:42.483 "id": 1, 00:05:42.483 "can_share": true 00:05:42.483 } 00:05:42.483 } 00:05:42.483 ], 00:05:42.483 "mp_policy": "active_passive" 00:05:42.483 } 00:05:42.483 } 00:05:42.483 ] 00:05:42.483 19:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3517074 00:05:42.483 19:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:05:42.483 19:12:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:05:42.483 Running I/O for 10 seconds... 00:05:43.861 Latency(us) 00:05:43.861 [2024-11-26T18:12:17.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:43.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:43.862 Nvme0n1 : 1.00 25038.00 97.80 0.00 0.00 0.00 0.00 0.00 00:05:43.862 [2024-11-26T18:12:17.727Z] =================================================================================================================== 00:05:43.862 [2024-11-26T18:12:17.727Z] Total : 25038.00 97.80 0.00 0.00 0.00 0.00 0.00 00:05:43.862 00:05:44.430 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:44.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:44.689 Nvme0n1 : 2.00 25190.00 98.40 0.00 0.00 0.00 0.00 0.00 00:05:44.689 [2024-11-26T18:12:18.554Z] =================================================================================================================== 00:05:44.689 [2024-11-26T18:12:18.554Z] Total : 25190.00 98.40 0.00 0.00 0.00 0.00 0.00 00:05:44.689 00:05:44.689 true 00:05:44.689 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:44.689 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:05:44.947 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:05:44.947 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:05:44.947 19:12:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3517074 00:05:45.516 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:45.516 Nvme0n1 : 3.00 25280.67 98.75 0.00 0.00 0.00 0.00 0.00 00:05:45.516 [2024-11-26T18:12:19.381Z] =================================================================================================================== 00:05:45.516 [2024-11-26T18:12:19.381Z] Total : 25280.67 98.75 0.00 0.00 0.00 0.00 0.00 00:05:45.516 00:05:46.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:46.453 Nvme0n1 : 4.00 25310.75 98.87 0.00 0.00 0.00 0.00 0.00 00:05:46.453 [2024-11-26T18:12:20.318Z] =================================================================================================================== 00:05:46.453 [2024-11-26T18:12:20.318Z] Total : 25310.75 98.87 0.00 0.00 0.00 0.00 0.00 00:05:46.453 00:05:47.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:47.833 Nvme0n1 : 5.00 25355.60 99.05 0.00 0.00 0.00 0.00 0.00 00:05:47.833 [2024-11-26T18:12:21.698Z] =================================================================================================================== 00:05:47.833 [2024-11-26T18:12:21.698Z] Total : 25355.60 99.05 0.00 0.00 0.00 0.00 0.00 00:05:47.833 00:05:48.772 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:48.772 Nvme0n1 : 6.00 25385.67 99.16 0.00 0.00 0.00 0.00 0.00 00:05:48.772 [2024-11-26T18:12:22.637Z] =================================================================================================================== 00:05:48.772 
[2024-11-26T18:12:22.637Z] Total : 25385.67 99.16 0.00 0.00 0.00 0.00 0.00 00:05:48.772 00:05:49.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:49.711 Nvme0n1 : 7.00 25406.86 99.25 0.00 0.00 0.00 0.00 0.00 00:05:49.711 [2024-11-26T18:12:23.576Z] =================================================================================================================== 00:05:49.711 [2024-11-26T18:12:23.576Z] Total : 25406.86 99.25 0.00 0.00 0.00 0.00 0.00 00:05:49.711 00:05:50.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:50.658 Nvme0n1 : 8.00 25423.00 99.31 0.00 0.00 0.00 0.00 0.00 00:05:50.658 [2024-11-26T18:12:24.523Z] =================================================================================================================== 00:05:50.658 [2024-11-26T18:12:24.523Z] Total : 25423.00 99.31 0.00 0.00 0.00 0.00 0.00 00:05:50.658 00:05:51.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:51.596 Nvme0n1 : 9.00 25442.44 99.38 0.00 0.00 0.00 0.00 0.00 00:05:51.596 [2024-11-26T18:12:25.461Z] =================================================================================================================== 00:05:51.596 [2024-11-26T18:12:25.461Z] Total : 25442.44 99.38 0.00 0.00 0.00 0.00 0.00 00:05:51.596 00:05:52.554 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:05:52.555 Nvme0n1 : 10.00 25454.60 99.43 0.00 0.00 0.00 0.00 0.00 00:05:52.555 [2024-11-26T18:12:26.420Z] =================================================================================================================== 00:05:52.555 [2024-11-26T18:12:26.420Z] Total : 25454.60 99.43 0.00 0.00 0.00 0.00 0.00 00:05:52.555 00:05:52.555 00:05:52.555 Latency(us) 00:05:52.555 [2024-11-26T18:12:26.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:52.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:05:52.555 Nvme0n1 : 10.00 25454.29 99.43 0.00 0.00 5025.51 3003.73 10158.08 00:05:52.555 [2024-11-26T18:12:26.420Z] =================================================================================================================== 00:05:52.555 [2024-11-26T18:12:26.420Z] Total : 25454.29 99.43 0.00 0.00 5025.51 3003.73 10158.08 00:05:52.555 { 00:05:52.555 "results": [ 00:05:52.555 { 00:05:52.555 "job": "Nvme0n1", 00:05:52.555 "core_mask": "0x2", 00:05:52.555 "workload": "randwrite", 00:05:52.555 "status": "finished", 00:05:52.555 "queue_depth": 128, 00:05:52.555 "io_size": 4096, 00:05:52.555 "runtime": 10.003813, 00:05:52.555 "iops": 25454.294277591955, 00:05:52.555 "mibps": 99.43083702184357, 00:05:52.555 "io_failed": 0, 00:05:52.555 "io_timeout": 0, 00:05:52.555 "avg_latency_us": 5025.512408000837, 00:05:52.555 "min_latency_us": 3003.733333333333, 00:05:52.555 "max_latency_us": 10158.08 00:05:52.555 } 00:05:52.555 ], 00:05:52.555 "core_count": 1 00:05:52.555 } 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3517062 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3517062 ']' 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3517062 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3517062 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3517062' 00:05:52.555 killing process with pid 3517062 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3517062 00:05:52.555 Received shutdown signal, test time was about 10.000000 seconds 00:05:52.555 00:05:52.555 Latency(us) 00:05:52.555 [2024-11-26T18:12:26.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:52.555 [2024-11-26T18:12:26.420Z] =================================================================================================================== 00:05:52.555 [2024-11-26T18:12:26.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:05:52.555 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3517062 00:05:52.936 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:52.936 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:53.228 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:53.228 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:05:53.228 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:05:53.228 19:12:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:05:53.228 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3512365 00:05:53.228 19:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3512365 00:05:53.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3512365 Killed "${NVMF_APP[@]}" "$@" 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3519500 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3519500 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3519500 ']' 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:53.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:53.228 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:05:53.228 [2024-11-26 19:12:27.065691] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:05:53.228 [2024-11-26 19:12:27.065749] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:53.488 [2024-11-26 19:12:27.136216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.488 [2024-11-26 19:12:27.166208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:53.488 [2024-11-26 19:12:27.166234] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:53.488 [2024-11-26 19:12:27.166241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:53.488 [2024-11-26 19:12:27.166246] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:53.488 [2024-11-26 19:12:27.166250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:53.488 [2024-11-26 19:12:27.166707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:53.488 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:53.747 [2024-11-26 19:12:27.407904] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:05:53.747 [2024-11-26 19:12:27.407978] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:05:53.747 [2024-11-26 19:12:27.408000] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0d812ec1-6a02-4215-98b5-1b5fcef89ae0 
00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:05:53.747 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 -t 2000 00:05:54.006 [ 00:05:54.006 { 00:05:54.006 "name": "0d812ec1-6a02-4215-98b5-1b5fcef89ae0", 00:05:54.006 "aliases": [ 00:05:54.006 "lvs/lvol" 00:05:54.006 ], 00:05:54.006 "product_name": "Logical Volume", 00:05:54.006 "block_size": 4096, 00:05:54.006 "num_blocks": 38912, 00:05:54.006 "uuid": "0d812ec1-6a02-4215-98b5-1b5fcef89ae0", 00:05:54.006 "assigned_rate_limits": { 00:05:54.006 "rw_ios_per_sec": 0, 00:05:54.006 "rw_mbytes_per_sec": 0, 00:05:54.006 "r_mbytes_per_sec": 0, 00:05:54.006 "w_mbytes_per_sec": 0 00:05:54.006 }, 00:05:54.006 "claimed": false, 00:05:54.006 "zoned": false, 00:05:54.006 "supported_io_types": { 00:05:54.006 "read": true, 00:05:54.006 "write": true, 00:05:54.006 "unmap": true, 00:05:54.006 "flush": false, 00:05:54.006 "reset": true, 00:05:54.006 "nvme_admin": false, 00:05:54.006 "nvme_io": false, 00:05:54.006 "nvme_io_md": false, 00:05:54.006 "write_zeroes": true, 00:05:54.006 "zcopy": false, 00:05:54.006 "get_zone_info": false, 00:05:54.006 "zone_management": false, 00:05:54.006 "zone_append": 
false, 00:05:54.006 "compare": false, 00:05:54.006 "compare_and_write": false, 00:05:54.006 "abort": false, 00:05:54.006 "seek_hole": true, 00:05:54.006 "seek_data": true, 00:05:54.006 "copy": false, 00:05:54.006 "nvme_iov_md": false 00:05:54.006 }, 00:05:54.006 "driver_specific": { 00:05:54.006 "lvol": { 00:05:54.006 "lvol_store_uuid": "aa713f10-8b34-4f7b-8395-ab2299961c25", 00:05:54.006 "base_bdev": "aio_bdev", 00:05:54.006 "thin_provision": false, 00:05:54.007 "num_allocated_clusters": 38, 00:05:54.007 "snapshot": false, 00:05:54.007 "clone": false, 00:05:54.007 "esnap_clone": false 00:05:54.007 } 00:05:54.007 } 00:05:54.007 } 00:05:54.007 ] 00:05:54.007 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:05:54.007 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:54.007 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:05:54.265 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:05:54.265 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:54.265 19:12:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:05:54.265 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:05:54.265 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:05:54.525 [2024-11-26 19:12:28.172333] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:54.525 19:12:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:54.525 request: 00:05:54.525 { 00:05:54.525 "uuid": "aa713f10-8b34-4f7b-8395-ab2299961c25", 00:05:54.525 "method": "bdev_lvol_get_lvstores", 00:05:54.525 "req_id": 1 00:05:54.525 } 00:05:54.525 Got JSON-RPC error response 00:05:54.525 response: 00:05:54.525 { 00:05:54.525 "code": -19, 00:05:54.525 "message": "No such device" 00:05:54.525 } 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.525 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:05:54.785 aio_bdev 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:05:54.785 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:05:55.044 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 -t 2000 00:05:55.044 [ 00:05:55.044 { 00:05:55.044 "name": "0d812ec1-6a02-4215-98b5-1b5fcef89ae0", 00:05:55.044 "aliases": [ 00:05:55.044 "lvs/lvol" 00:05:55.044 ], 00:05:55.044 "product_name": "Logical Volume", 00:05:55.044 "block_size": 4096, 00:05:55.044 "num_blocks": 38912, 00:05:55.044 "uuid": "0d812ec1-6a02-4215-98b5-1b5fcef89ae0", 00:05:55.044 "assigned_rate_limits": { 00:05:55.044 "rw_ios_per_sec": 0, 00:05:55.044 "rw_mbytes_per_sec": 0, 00:05:55.044 "r_mbytes_per_sec": 0, 00:05:55.044 "w_mbytes_per_sec": 0 00:05:55.044 }, 00:05:55.044 "claimed": false, 00:05:55.044 "zoned": false, 00:05:55.044 "supported_io_types": { 00:05:55.044 "read": true, 00:05:55.044 "write": true, 00:05:55.044 "unmap": true, 00:05:55.044 "flush": false, 00:05:55.044 "reset": true, 00:05:55.044 "nvme_admin": false, 00:05:55.044 "nvme_io": false, 00:05:55.044 "nvme_io_md": false, 00:05:55.044 "write_zeroes": true, 00:05:55.044 "zcopy": false, 00:05:55.044 "get_zone_info": false, 00:05:55.044 "zone_management": false, 00:05:55.044 "zone_append": false, 00:05:55.044 "compare": false, 00:05:55.044 "compare_and_write": false, 
00:05:55.044 "abort": false, 00:05:55.044 "seek_hole": true, 00:05:55.044 "seek_data": true, 00:05:55.044 "copy": false, 00:05:55.045 "nvme_iov_md": false 00:05:55.045 }, 00:05:55.045 "driver_specific": { 00:05:55.045 "lvol": { 00:05:55.045 "lvol_store_uuid": "aa713f10-8b34-4f7b-8395-ab2299961c25", 00:05:55.045 "base_bdev": "aio_bdev", 00:05:55.045 "thin_provision": false, 00:05:55.045 "num_allocated_clusters": 38, 00:05:55.045 "snapshot": false, 00:05:55.045 "clone": false, 00:05:55.045 "esnap_clone": false 00:05:55.045 } 00:05:55.045 } 00:05:55.045 } 00:05:55.045 ] 00:05:55.045 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:05:55.045 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:55.045 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:05:55.304 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:05:55.304 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:55.304 19:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:05:55.304 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:05:55.304 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d812ec1-6a02-4215-98b5-1b5fcef89ae0 00:05:55.563 19:12:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa713f10-8b34-4f7b-8395-ab2299961c25 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:05:55.822 00:05:55.822 real 0m15.659s 00:05:55.822 user 0m42.300s 00:05:55.822 sys 0m2.636s 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:05:55.822 ************************************ 00:05:55.822 END TEST lvs_grow_dirty 00:05:55.822 ************************************ 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:05:55.822 nvmf_trace.0 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:55.822 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:56.081 rmmod nvme_tcp 00:05:56.081 rmmod nvme_fabrics 00:05:56.081 rmmod nvme_keyring 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3519500 ']' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3519500 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3519500 ']' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3519500 
00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3519500 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3519500' 00:05:56.081 killing process with pid 3519500 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3519500 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3519500 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:56.081 19:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:58.616 00:05:58.616 real 0m39.348s 00:05:58.616 user 1m1.817s 00:05:58.616 sys 0m8.162s 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:05:58.616 ************************************ 00:05:58.616 END TEST nvmf_lvs_grow 00:05:58.616 ************************************ 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:58.616 ************************************ 00:05:58.616 START TEST nvmf_bdev_io_wait 00:05:58.616 ************************************ 00:05:58.616 19:12:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:05:58.616 * Looking for test storage... 
00:05:58.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.616 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.617 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.617 --rc genhtml_branch_coverage=1 00:05:58.617 --rc genhtml_function_coverage=1 00:05:58.617 --rc genhtml_legend=1 00:05:58.617 --rc geninfo_all_blocks=1 00:05:58.617 --rc geninfo_unexecuted_blocks=1 00:05:58.617 00:05:58.617 ' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.617 --rc genhtml_branch_coverage=1 00:05:58.617 --rc genhtml_function_coverage=1 00:05:58.617 --rc genhtml_legend=1 00:05:58.617 --rc geninfo_all_blocks=1 00:05:58.617 --rc geninfo_unexecuted_blocks=1 00:05:58.617 00:05:58.617 ' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.617 --rc genhtml_branch_coverage=1 00:05:58.617 --rc genhtml_function_coverage=1 00:05:58.617 --rc genhtml_legend=1 00:05:58.617 --rc geninfo_all_blocks=1 00:05:58.617 --rc geninfo_unexecuted_blocks=1 00:05:58.617 00:05:58.617 ' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.617 --rc genhtml_branch_coverage=1 00:05:58.617 --rc genhtml_function_coverage=1 00:05:58.617 --rc genhtml_legend=1 00:05:58.617 --rc geninfo_all_blocks=1 00:05:58.617 --rc geninfo_unexecuted_blocks=1 00:05:58.617 00:05:58.617 ' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.617 19:12:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:05:58.617 19:12:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:03.892 19:12:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:03.892 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:03.892 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.892 19:12:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:03.892 Found net devices under 0000:31:00.0: cvl_0_0 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:03.892 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.892 
19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:03.893 Found net devices under 0000:31:00.1: cvl_0_1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.893 19:12:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:03.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:03.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:06:03.893 00:06:03.893 --- 10.0.0.2 ping statistics --- 00:06:03.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.893 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:03.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:06:03.893 00:06:03.893 --- 10.0.0.1 ping statistics --- 00:06:03.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.893 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3524600 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 3524600 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3524600 ']' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:06:03.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.893 19:12:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:03.893 [2024-11-26 19:12:37.577633] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:03.893 [2024-11-26 19:12:37.577697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:03.893 [2024-11-26 19:12:37.670913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:03.893 [2024-11-26 19:12:37.725116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:03.893 [2024-11-26 19:12:37.725173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:03.893 [2024-11-26 19:12:37.725182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:03.893 [2024-11-26 19:12:37.725190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:03.893 [2024-11-26 19:12:37.725196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:03.893 [2024-11-26 19:12:37.727664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.893 [2024-11-26 19:12:37.727827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.893 [2024-11-26 19:12:37.727988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:03.893 [2024-11-26 19:12:37.727989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 19:12:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 [2024-11-26 19:12:38.487622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 Malloc0 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 
19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.832 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:04.833 [2024-11-26 19:12:38.536110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3524950 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3524951 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3524954 
00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:04.833 { 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme$subsystem", 00:06:04.833 "trtype": "$TEST_TRANSPORT", 00:06:04.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "$NVMF_PORT", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.833 "hdgst": ${hdgst:-false}, 00:06:04.833 "ddgst": ${ddgst:-false} 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 } 00:06:04.833 EOF 00:06:04.833 )") 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3524955 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:04.833 19:12:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:04.833 { 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme$subsystem", 00:06:04.833 "trtype": "$TEST_TRANSPORT", 00:06:04.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "$NVMF_PORT", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.833 "hdgst": ${hdgst:-false}, 00:06:04.833 "ddgst": ${ddgst:-false} 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 } 00:06:04.833 EOF 00:06:04.833 )") 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:04.833 { 00:06:04.833 "params": { 00:06:04.833 "name": 
"Nvme$subsystem", 00:06:04.833 "trtype": "$TEST_TRANSPORT", 00:06:04.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "$NVMF_PORT", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.833 "hdgst": ${hdgst:-false}, 00:06:04.833 "ddgst": ${ddgst:-false} 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 } 00:06:04.833 EOF 00:06:04.833 )") 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:04.833 { 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme$subsystem", 00:06:04.833 "trtype": "$TEST_TRANSPORT", 00:06:04.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "$NVMF_PORT", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:04.833 "hdgst": ${hdgst:-false}, 00:06:04.833 "ddgst": ${ddgst:-false} 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 } 00:06:04.833 EOF 00:06:04.833 )") 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3524950 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:04.833 19:12:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme1", 00:06:04.833 "trtype": "tcp", 00:06:04.833 "traddr": "10.0.0.2", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "4420", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:04.833 "hdgst": false, 00:06:04.833 "ddgst": false 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 }' 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme1", 00:06:04.833 "trtype": "tcp", 00:06:04.833 "traddr": "10.0.0.2", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "4420", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:04.833 "hdgst": false, 00:06:04.833 "ddgst": false 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 }' 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:04.833 "params": { 00:06:04.833 
"name": "Nvme1", 00:06:04.833 "trtype": "tcp", 00:06:04.833 "traddr": "10.0.0.2", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "4420", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:04.833 "hdgst": false, 00:06:04.833 "ddgst": false 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 }' 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:06:04.833 19:12:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:04.833 "params": { 00:06:04.833 "name": "Nvme1", 00:06:04.833 "trtype": "tcp", 00:06:04.833 "traddr": "10.0.0.2", 00:06:04.833 "adrfam": "ipv4", 00:06:04.833 "trsvcid": "4420", 00:06:04.833 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:04.833 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:04.833 "hdgst": false, 00:06:04.833 "ddgst": false 00:06:04.833 }, 00:06:04.833 "method": "bdev_nvme_attach_controller" 00:06:04.833 }' 00:06:04.833 [2024-11-26 19:12:38.576690] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:04.833 [2024-11-26 19:12:38.576747] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:06:04.833 [2024-11-26 19:12:38.578708] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:04.833 [2024-11-26 19:12:38.578764] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:06:04.833 [2024-11-26 19:12:38.579916] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:06:04.834 [2024-11-26 19:12:38.579976] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:06:04.834 [2024-11-26 19:12:38.581287] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:04.834 [2024-11-26 19:12:38.581352] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:06:05.094 [2024-11-26 19:12:38.789819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.094 [2024-11-26 19:12:38.829229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:06:05.094 [2024-11-26 19:12:38.875141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.094 [2024-11-26 19:12:38.917606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.094 [2024-11-26 19:12:38.937889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.353 [2024-11-26 19:12:38.970664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:06:05.353 [2024-11-26 19:12:38.994452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.353 [2024-11-26 19:12:39.020942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:06:05.353 Running I/O for 1 seconds... 00:06:05.353 Running I/O for 1 seconds... 00:06:05.353 Running I/O for 1 seconds... 00:06:05.613 Running I/O for 1 seconds... 
00:06:06.552 8623.00 IOPS, 33.68 MiB/s [2024-11-26T18:12:40.417Z] 181568.00 IOPS, 709.25 MiB/s 00:06:06.552 Latency(us) 00:06:06.552 [2024-11-26T18:12:40.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.552 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:06:06.552 Nvme1n1 : 1.00 181215.38 707.87 0.00 0.00 702.40 293.55 1952.43 00:06:06.552 [2024-11-26T18:12:40.417Z] =================================================================================================================== 00:06:06.552 [2024-11-26T18:12:40.417Z] Total : 181215.38 707.87 0.00 0.00 702.40 293.55 1952.43 00:06:06.552 00:06:06.552 Latency(us) 00:06:06.552 [2024-11-26T18:12:40.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.553 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:06:06.553 Nvme1n1 : 1.02 8607.26 33.62 0.00 0.00 14721.91 7099.73 24029.87 00:06:06.553 [2024-11-26T18:12:40.418Z] =================================================================================================================== 00:06:06.553 [2024-11-26T18:12:40.418Z] Total : 8607.26 33.62 0.00 0.00 14721.91 7099.73 24029.87 00:06:06.553 8471.00 IOPS, 33.09 MiB/s 00:06:06.553 Latency(us) 00:06:06.553 [2024-11-26T18:12:40.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.553 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:06:06.553 Nvme1n1 : 1.01 8581.35 33.52 0.00 0.00 14879.66 3904.85 32331.09 00:06:06.553 [2024-11-26T18:12:40.418Z] =================================================================================================================== 00:06:06.553 [2024-11-26T18:12:40.418Z] Total : 8581.35 33.52 0.00 0.00 14879.66 3904.85 32331.09 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3524951 00:06:06.553 12575.00 IOPS, 49.12 MiB/s 00:06:06.553 Latency(us) 
00:06:06.553 [2024-11-26T18:12:40.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:06.553 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:06:06.553 Nvme1n1 : 1.01 12627.59 49.33 0.00 0.00 10102.67 4805.97 17257.81 00:06:06.553 [2024-11-26T18:12:40.418Z] =================================================================================================================== 00:06:06.553 [2024-11-26T18:12:40.418Z] Total : 12627.59 49.33 0.00 0.00 10102.67 4805.97 17257.81 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3524954 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3524955 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:06.553 rmmod nvme_tcp 00:06:06.553 rmmod nvme_fabrics 00:06:06.553 rmmod nvme_keyring 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3524600 ']' 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3524600 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3524600 ']' 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3524600 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.553 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3524600 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3524600' 00:06:06.812 killing process with pid 3524600 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3524600 00:06:06.812 19:12:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3524600 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.812 19:12:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:09.348 00:06:09.348 real 0m10.644s 00:06:09.348 user 0m17.802s 00:06:09.348 sys 0m5.684s 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:06:09.348 ************************************ 
00:06:09.348 END TEST nvmf_bdev_io_wait 00:06:09.348 ************************************ 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:09.348 ************************************ 00:06:09.348 START TEST nvmf_queue_depth 00:06:09.348 ************************************ 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:06:09.348 * Looking for test storage... 00:06:09.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:09.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.348 --rc genhtml_branch_coverage=1 00:06:09.348 --rc genhtml_function_coverage=1 00:06:09.348 --rc genhtml_legend=1 00:06:09.348 --rc geninfo_all_blocks=1 00:06:09.348 --rc 
geninfo_unexecuted_blocks=1 00:06:09.348 00:06:09.348 ' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:09.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.348 --rc genhtml_branch_coverage=1 00:06:09.348 --rc genhtml_function_coverage=1 00:06:09.348 --rc genhtml_legend=1 00:06:09.348 --rc geninfo_all_blocks=1 00:06:09.348 --rc geninfo_unexecuted_blocks=1 00:06:09.348 00:06:09.348 ' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:09.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.348 --rc genhtml_branch_coverage=1 00:06:09.348 --rc genhtml_function_coverage=1 00:06:09.348 --rc genhtml_legend=1 00:06:09.348 --rc geninfo_all_blocks=1 00:06:09.348 --rc geninfo_unexecuted_blocks=1 00:06:09.348 00:06:09.348 ' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:09.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.348 --rc genhtml_branch_coverage=1 00:06:09.348 --rc genhtml_function_coverage=1 00:06:09.348 --rc genhtml_legend=1 00:06:09.348 --rc geninfo_all_blocks=1 00:06:09.348 --rc geninfo_unexecuted_blocks=1 00:06:09.348 00:06:09.348 ' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:09.348 19:12:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.348 19:12:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:09.348 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.349 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.349 19:12:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.349 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:09.349 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:09.349 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.349 19:12:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:06:14.626 19:12:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:14.626 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:14.626 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:14.626 Found net devices under 0000:31:00.0: cvl_0_0 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:14.626 Found net devices under 0000:31:00.1: cvl_0_1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.626 
19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.626 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:14.886 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:14.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:14.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:06:14.886 00:06:14.886 --- 10.0.0.2 ping statistics --- 00:06:14.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.886 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:06:14.886 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:14.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:06:14.886 00:06:14.886 --- 10.0.0.1 ping statistics --- 00:06:14.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.887 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3529676 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3529676 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3529676 ']' 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.887 19:12:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:14.887 [2024-11-26 19:12:48.578628] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:14.887 [2024-11-26 19:12:48.578695] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.887 [2024-11-26 19:12:48.673855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.887 [2024-11-26 19:12:48.725281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.887 [2024-11-26 19:12:48.725334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:14.887 [2024-11-26 19:12:48.725342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.887 [2024-11-26 19:12:48.725350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.887 [2024-11-26 19:12:48.725356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.887 [2024-11-26 19:12:48.726155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 [2024-11-26 19:12:49.421461] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 Malloc0 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 [2024-11-26 19:12:49.466726] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.825 19:12:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3530009 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3530009 /var/tmp/bdevperf.sock 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3530009 ']' 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:15.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:15.825 19:12:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:06:15.825 [2024-11-26 19:12:49.509245] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:06:15.826 [2024-11-26 19:12:49.509307] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3530009 ] 00:06:15.826 [2024-11-26 19:12:49.593079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.826 [2024-11-26 19:12:49.646094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:16.764 NVMe0n1 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.764 19:12:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:16.764 Running I/O for 10 seconds... 
00:06:19.079 11264.00 IOPS, 44.00 MiB/s [2024-11-26T18:12:53.883Z] 11776.00 IOPS, 46.00 MiB/s [2024-11-26T18:12:54.824Z] 12288.00 IOPS, 48.00 MiB/s [2024-11-26T18:12:55.765Z] 12534.50 IOPS, 48.96 MiB/s [2024-11-26T18:12:56.702Z] 12687.80 IOPS, 49.56 MiB/s [2024-11-26T18:12:57.639Z] 12795.33 IOPS, 49.98 MiB/s [2024-11-26T18:12:59.020Z] 12874.00 IOPS, 50.29 MiB/s [2024-11-26T18:12:59.958Z] 12985.00 IOPS, 50.72 MiB/s [2024-11-26T18:13:00.897Z] 13070.56 IOPS, 51.06 MiB/s [2024-11-26T18:13:00.897Z] 13117.00 IOPS, 51.24 MiB/s 00:06:27.032 Latency(us) 00:06:27.032 [2024-11-26T18:13:00.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.032 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:06:27.032 Verification LBA range: start 0x0 length 0x4000 00:06:27.032 NVMe0n1 : 10.05 13158.37 51.40 0.00 0.00 77550.49 7263.57 56797.87 00:06:27.032 [2024-11-26T18:13:00.897Z] =================================================================================================================== 00:06:27.032 [2024-11-26T18:13:00.897Z] Total : 13158.37 51.40 0.00 0.00 77550.49 7263.57 56797.87 00:06:27.032 { 00:06:27.032 "results": [ 00:06:27.032 { 00:06:27.032 "job": "NVMe0n1", 00:06:27.032 "core_mask": "0x1", 00:06:27.032 "workload": "verify", 00:06:27.032 "status": "finished", 00:06:27.032 "verify_range": { 00:06:27.032 "start": 0, 00:06:27.032 "length": 16384 00:06:27.032 }, 00:06:27.032 "queue_depth": 1024, 00:06:27.032 "io_size": 4096, 00:06:27.032 "runtime": 10.045316, 00:06:27.032 "iops": 13158.371523603637, 00:06:27.032 "mibps": 51.39988876407671, 00:06:27.032 "io_failed": 0, 00:06:27.032 "io_timeout": 0, 00:06:27.032 "avg_latency_us": 77550.48999767992, 00:06:27.032 "min_latency_us": 7263.573333333334, 00:06:27.032 "max_latency_us": 56797.86666666667 00:06:27.032 } 00:06:27.032 ], 00:06:27.032 "core_count": 1 00:06:27.032 } 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3530009 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3530009 ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3530009 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3530009 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3530009' 00:06:27.032 killing process with pid 3530009 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3530009 00:06:27.032 Received shutdown signal, test time was about 10.000000 seconds 00:06:27.032 00:06:27.032 Latency(us) 00:06:27.032 [2024-11-26T18:13:00.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:27.032 [2024-11-26T18:13:00.897Z] =================================================================================================================== 00:06:27.032 [2024-11-26T18:13:00.897Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3530009 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:27.032 rmmod nvme_tcp 00:06:27.032 rmmod nvme_fabrics 00:06:27.032 rmmod nvme_keyring 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3529676 ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3529676 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3529676 ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3529676 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.032 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3529676 00:06:27.291 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:06:27.291 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:27.291 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3529676' 00:06:27.291 killing process with pid 3529676 00:06:27.291 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3529676 00:06:27.291 19:13:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3529676 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:27.291 19:13:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.829 19:13:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:29.829 00:06:29.829 real 0m20.425s 00:06:29.829 user 0m24.868s 00:06:29.829 sys 0m5.635s 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:06:29.829 ************************************ 00:06:29.829 END TEST nvmf_queue_depth 00:06:29.829 ************************************ 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:29.829 ************************************ 00:06:29.829 START TEST nvmf_target_multipath 00:06:29.829 ************************************ 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:06:29.829 * Looking for test storage... 
00:06:29.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:06:29.829 19:13:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:29.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.829 --rc genhtml_branch_coverage=1 00:06:29.829 --rc genhtml_function_coverage=1 00:06:29.829 --rc genhtml_legend=1 00:06:29.829 --rc geninfo_all_blocks=1 00:06:29.829 --rc geninfo_unexecuted_blocks=1 00:06:29.829 00:06:29.829 ' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:29.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.829 --rc genhtml_branch_coverage=1 00:06:29.829 --rc genhtml_function_coverage=1 00:06:29.829 --rc genhtml_legend=1 00:06:29.829 --rc geninfo_all_blocks=1 00:06:29.829 --rc geninfo_unexecuted_blocks=1 00:06:29.829 00:06:29.829 ' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:29.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.829 --rc genhtml_branch_coverage=1 00:06:29.829 --rc genhtml_function_coverage=1 00:06:29.829 --rc genhtml_legend=1 00:06:29.829 --rc geninfo_all_blocks=1 00:06:29.829 --rc geninfo_unexecuted_blocks=1 00:06:29.829 00:06:29.829 ' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:29.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.829 --rc genhtml_branch_coverage=1 00:06:29.829 --rc genhtml_function_coverage=1 00:06:29.829 --rc genhtml_legend=1 00:06:29.829 --rc geninfo_all_blocks=1 00:06:29.829 --rc geninfo_unexecuted_blocks=1 00:06:29.829 00:06:29.829 ' 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:29.829 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:29.830 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:06:29.830 19:13:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:35.106 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:35.106 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:35.106 Found net devices under 0000:31:00.0: cvl_0_0 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.106 19:13:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.106 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:35.107 Found net devices under 0000:31:00.1: cvl_0_1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:35.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:06:35.107 00:06:35.107 --- 10.0.0.2 ping statistics --- 00:06:35.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.107 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:35.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:06:35.107 00:06:35.107 --- 10.0.0.1 ping statistics --- 00:06:35.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.107 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:06:35.107 only one NIC for nvmf test 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:06:35.107 19:13:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:35.107 rmmod nvme_tcp 00:06:35.107 rmmod nvme_fabrics 00:06:35.107 rmmod nvme_keyring 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.107 19:13:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.015 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.015 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:37.016 00:06:37.016 real 0m7.727s 00:06:37.016 user 0m1.428s 00:06:37.016 sys 0m4.157s 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.016 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:06:37.016 ************************************ 00:06:37.016 END TEST nvmf_target_multipath 00:06:37.016 ************************************ 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.276 ************************************ 00:06:37.276 START TEST nvmf_zcopy 00:06:37.276 ************************************ 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:06:37.276 * Looking for test storage... 00:06:37.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.276 19:13:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.276 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.277 19:13:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.277 --rc genhtml_branch_coverage=1 00:06:37.277 --rc genhtml_function_coverage=1 00:06:37.277 --rc genhtml_legend=1 00:06:37.277 --rc geninfo_all_blocks=1 00:06:37.277 --rc geninfo_unexecuted_blocks=1 00:06:37.277 00:06:37.277 ' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.277 --rc genhtml_branch_coverage=1 00:06:37.277 --rc genhtml_function_coverage=1 00:06:37.277 --rc genhtml_legend=1 00:06:37.277 --rc geninfo_all_blocks=1 00:06:37.277 --rc geninfo_unexecuted_blocks=1 00:06:37.277 00:06:37.277 ' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.277 --rc genhtml_branch_coverage=1 00:06:37.277 --rc genhtml_function_coverage=1 00:06:37.277 --rc genhtml_legend=1 00:06:37.277 --rc geninfo_all_blocks=1 00:06:37.277 --rc geninfo_unexecuted_blocks=1 00:06:37.277 00:06:37.277 ' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.277 --rc genhtml_branch_coverage=1 00:06:37.277 --rc 
genhtml_function_coverage=1 00:06:37.277 --rc genhtml_legend=1 00:06:37.277 --rc geninfo_all_blocks=1 00:06:37.277 --rc geninfo_unexecuted_blocks=1 00:06:37.277 00:06:37.277 ' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.277 19:13:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:37.277 19:13:11 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:37.277 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:37.278 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:06:37.278 19:13:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.851 19:13:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:43.851 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:43.851 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:43.851 Found net devices under 0000:31:00.0: cvl_0_0 00:06:43.851 19:13:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.851 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:43.852 Found net devices under 0000:31:00.1: cvl_0_1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.852 19:13:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:43.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:06:43.852 00:06:43.852 --- 10.0.0.2 ping statistics --- 00:06:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.852 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:43.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:06:43.852 00:06:43.852 --- 10.0.0.1 ping statistics --- 00:06:43.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.852 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3541380 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3541380 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- 
# '[' -z 3541380 ']' 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:06:43.852 19:13:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:43.852 [2024-11-26 19:13:16.988544] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:43.852 [2024-11-26 19:13:16.988605] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.852 [2024-11-26 19:13:17.081007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.852 [2024-11-26 19:13:17.132340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.852 [2024-11-26 19:13:17.132395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:43.852 [2024-11-26 19:13:17.132404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.852 [2024-11-26 19:13:17.132411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.852 [2024-11-26 19:13:17.132418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.852 [2024-11-26 19:13:17.133235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 [2024-11-26 19:13:17.832434] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 [2024-11-26 19:13:17.848725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 malloc0 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:44.112 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:44.112 { 00:06:44.112 "params": { 00:06:44.112 "name": "Nvme$subsystem", 00:06:44.112 "trtype": "$TEST_TRANSPORT", 00:06:44.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:44.112 "adrfam": "ipv4", 00:06:44.112 "trsvcid": "$NVMF_PORT", 00:06:44.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:44.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:44.112 "hdgst": ${hdgst:-false}, 00:06:44.113 "ddgst": ${ddgst:-false} 00:06:44.113 }, 00:06:44.113 "method": "bdev_nvme_attach_controller" 00:06:44.113 } 00:06:44.113 EOF 00:06:44.113 )") 00:06:44.113 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:06:44.113 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:06:44.113 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:06:44.113 19:13:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:44.113 "params": { 00:06:44.113 "name": "Nvme1", 00:06:44.113 "trtype": "tcp", 00:06:44.113 "traddr": "10.0.0.2", 00:06:44.113 "adrfam": "ipv4", 00:06:44.113 "trsvcid": "4420", 00:06:44.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:44.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:44.113 "hdgst": false, 00:06:44.113 "ddgst": false 00:06:44.113 }, 00:06:44.113 "method": "bdev_nvme_attach_controller" 00:06:44.113 }' 00:06:44.113 [2024-11-26 19:13:17.921221] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:06:44.113 [2024-11-26 19:13:17.921282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3541416 ] 00:06:44.372 [2024-11-26 19:13:18.005707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.372 [2024-11-26 19:13:18.058890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.632 Running I/O for 10 seconds... 
00:06:46.957 9791.00 IOPS, 76.49 MiB/s [2024-11-26T18:13:21.764Z] 9855.00 IOPS, 76.99 MiB/s [2024-11-26T18:13:22.710Z] 9879.67 IOPS, 77.18 MiB/s [2024-11-26T18:13:23.741Z] 9898.25 IOPS, 77.33 MiB/s [2024-11-26T18:13:24.702Z] 9912.80 IOPS, 77.44 MiB/s [2024-11-26T18:13:25.641Z] 9923.67 IOPS, 77.53 MiB/s [2024-11-26T18:13:26.582Z] 9929.14 IOPS, 77.57 MiB/s [2024-11-26T18:13:27.524Z] 9934.00 IOPS, 77.61 MiB/s [2024-11-26T18:13:28.465Z] 9938.44 IOPS, 77.64 MiB/s [2024-11-26T18:13:28.465Z] 9941.20 IOPS, 77.67 MiB/s 00:06:54.600 Latency(us) 00:06:54.600 [2024-11-26T18:13:28.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.600 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:06:54.600 Verification LBA range: start 0x0 length 0x1000 00:06:54.600 Nvme1n1 : 10.01 9945.75 77.70 0.00 0.00 12828.61 2239.15 20753.07 00:06:54.600 [2024-11-26T18:13:28.465Z] =================================================================================================================== 00:06:54.600 [2024-11-26T18:13:28.465Z] Total : 9945.75 77.70 0.00 0.00 12828.61 2239.15 20753.07 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3543763 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:06:54.860 19:13:28 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:54.860 { 00:06:54.860 "params": { 00:06:54.860 "name": "Nvme$subsystem", 00:06:54.860 "trtype": "$TEST_TRANSPORT", 00:06:54.860 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:54.860 "adrfam": "ipv4", 00:06:54.860 "trsvcid": "$NVMF_PORT", 00:06:54.860 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:54.860 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:54.860 "hdgst": ${hdgst:-false}, 00:06:54.860 "ddgst": ${ddgst:-false} 00:06:54.860 }, 00:06:54.860 "method": "bdev_nvme_attach_controller" 00:06:54.860 } 00:06:54.860 EOF 00:06:54.860 )") 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:06:54.860 [2024-11-26 19:13:28.530011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.530044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:06:54.860 19:13:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:54.860 "params": { 00:06:54.860 "name": "Nvme1", 00:06:54.860 "trtype": "tcp", 00:06:54.860 "traddr": "10.0.0.2", 00:06:54.860 "adrfam": "ipv4", 00:06:54.860 "trsvcid": "4420", 00:06:54.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:06:54.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:06:54.860 "hdgst": false, 00:06:54.860 "ddgst": false 00:06:54.860 }, 00:06:54.860 "method": "bdev_nvme_attach_controller" 00:06:54.860 }' 00:06:54.860 [2024-11-26 19:13:28.537995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.538005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.546014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.546023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.554034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.554042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.556247] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:06:54.860 [2024-11-26 19:13:28.556294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3543763 ] 00:06:54.860 [2024-11-26 19:13:28.562055] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.562064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.570075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.570083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.578096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.578107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.586119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.586127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.594141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.594149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.602157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.602166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.610178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.610188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
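The bdevperf summary table above reports both IOPS and MiB/s for the 10-second verify run; the two columns are consistent given the 8192-byte I/O size passed as `-o 8192` (MiB/s = IOPS x io_size / 2^20). A quick cross-check of the reported figures:

```shell
# Cross-check of the bdevperf summary: 9945.75 IOPS at 8192-byte I/Os
# should reproduce the reported 77.70 MiB/s (MiB/s = IOPS * io_size / 2^20).
awk 'BEGIN {
  iops = 9945.75
  io_size = 8192                    # bytes, from bdevperf -o 8192
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints "77.70 MiB/s"
```

The same arithmetic explains the per-interval samples (e.g. 9791.00 IOPS -> 76.49 MiB/s) printed while the job was running.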
00:06:54.860 [2024-11-26 19:13:28.618199] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.618208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.621585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.860 [2024-11-26 19:13:28.626221] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.626230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.634240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.634250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.642260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.642269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.650281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.860 [2024-11-26 19:13:28.650292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.860 [2024-11-26 19:13:28.651426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.860 [2024-11-26 19:13:28.658301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.658310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.666327] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.666338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.674345] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.674356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.682364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.682375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.690383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.690393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.698403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.698412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.706422] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.706430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.714441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.714450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:54.861 [2024-11-26 19:13:28.722745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:54.861 [2024-11-26 19:13:28.722762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.730759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.730772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.738776] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.738786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.746796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.746806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.754817] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.754828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.762839] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.762849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.770859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.770867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.778879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.778888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.786900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.786908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.794922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.794931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.802944] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 
[2024-11-26 19:13:28.802953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.810966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.810975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.818986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.818994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.827007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.827015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.835028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.835036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.843048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.843056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.851070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.851080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.859091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.859106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.867122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.867138] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.875136] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.875144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 Running I/O for 5 seconds... 00:06:55.120 [2024-11-26 19:13:28.883161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.883173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.893838] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.893854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.902585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.902602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.911397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.911413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.920065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.920081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.929403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.929421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.938014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.938030] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.946272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.946288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.955028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.955045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.963830] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.963846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.972693] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.972709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.120 [2024-11-26 19:13:28.981400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.120 [2024-11-26 19:13:28.981416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:28.990478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:28.990493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:28.998888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:28.998904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.007754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.007769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:55.380 [2024-11-26 19:13:29.016811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.016827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.025761] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.025781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.034775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.034791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.043310] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.043326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.052593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.052608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.061095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.061115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.070132] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.070148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.079242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.079258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.088444] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.088460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.097528] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.097543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.105961] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.105976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.114607] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.114622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.123584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.123600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.132694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.132709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.141587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.141602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.150524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.150539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.159757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.159772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.168819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.168833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.177783] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.177798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.186634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.186649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.195405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.195424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.204347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.204362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.213262] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.213277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.222414] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.222430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.231291] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 
[2024-11-26 19:13:29.231306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.380 [2024-11-26 19:13:29.239602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.380 [2024-11-26 19:13:29.239617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.248517] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.248532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.257182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.257197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.266167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.266182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.274857] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.274872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.283170] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.283185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.291815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.291830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.300276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.300292] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.308865] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.308879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.317802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.317817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.326169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.326184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.334986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.335000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.344119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.344135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.353248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.353264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.361811] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.361833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:55.642 [2024-11-26 19:13:29.370912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.370928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:55.642 [2024-11-26 19:13:29.378709] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:55.642 [2024-11-26 19:13:29.378725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the preceding pair of *ERROR* lines repeats at roughly 9 ms intervals from 19:13:29.388 through 19:13:30.873; duplicate repeats trimmed ...]
00:06:56.165 19259.00 IOPS, 150.46 MiB/s [2024-11-26T18:13:30.030Z]
00:06:57.211 19394.50 IOPS, 151.52 MiB/s [2024-11-26T18:13:31.076Z] [2024-11-26 19:13:30.882489] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.882504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.890880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.890896] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.899934] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.899949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.908439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.908454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.917007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.917022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.925716] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.925731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.935076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.935091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.944162] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.944177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.953040] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.953055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.961584] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.961599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:57.211 [2024-11-26 19:13:30.970425] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.970440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.979201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.979217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.988470] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.988485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:30.997075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:30.997090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.005601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.005616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.014293] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.014308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.022947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.022962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.031877] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.031892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.040391] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.040405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.049351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.049366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.058526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.058542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.211 [2024-11-26 19:13:31.067363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.211 [2024-11-26 19:13:31.067377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.076468] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.076483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.084959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.084974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.093615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.093629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.102632] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.102647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.111189] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.111204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.120363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.120378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.128908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.128925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.137992] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.138007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.146582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.146597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.155304] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.155319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.164311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.164326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.173149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.173165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.182181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 
[2024-11-26 19:13:31.182196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.190803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.190819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.199608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.199624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.208480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.208495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.217551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.217567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.226183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.226198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.235183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.235198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.243778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.243793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.252636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.252651] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.261271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.261286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.269867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.269882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.279070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.279084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.288053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.288069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.296713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.296728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.473 [2024-11-26 19:13:31.305549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.473 [2024-11-26 19:13:31.305564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.474 [2024-11-26 19:13:31.314477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.474 [2024-11-26 19:13:31.314492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.474 [2024-11-26 19:13:31.323435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.474 [2024-11-26 19:13:31.323450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:57.474 [2024-11-26 19:13:31.332209] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.474 [2024-11-26 19:13:31.332224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.341345] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.341360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.350325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.350340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.359515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.359534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.368046] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.368061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.377139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.377154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.386079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.386094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.395138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.395154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.403619] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.403635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.412318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.412333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.420950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.420965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.429712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.429727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.438772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.438787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.447138] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.447153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.455942] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.455957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.464287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.464302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.472938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.472953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.481478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.481493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.490141] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.490157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.499373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.499388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.507797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.507812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.516873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.516888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.525965] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.525984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.534324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.534339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.543289] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 
[2024-11-26 19:13:31.543304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.552122] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.552138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.561147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.561162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.569586] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.569601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.578428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.735 [2024-11-26 19:13:31.578443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.735 [2024-11-26 19:13:31.587491] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.736 [2024-11-26 19:13:31.587506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.736 [2024-11-26 19:13:31.596526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.736 [2024-11-26 19:13:31.596541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.605540] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.605555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.613924] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.613939] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.622926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.622944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.631912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.631927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.640714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.640730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.649714] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.649729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.658155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.658171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.666459] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.666474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.675192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.675208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.684367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.684383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:57.996 [2024-11-26 19:13:31.692868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.692888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.701909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.701925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.710744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.710760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.719707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.719722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.728749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.728765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.737816] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.737831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.746715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.746730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.755143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.755160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.763945] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.763961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.772908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.772923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.781862] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.781877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.790759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.790775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.799604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.799619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.808500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.808515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.817192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.817207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.825891] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:57.996 [2024-11-26 19:13:31.825906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:57.996 [2024-11-26 19:13:31.834967] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:06:57.996 [2024-11-26 19:13:31.834983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:57.997 [2024-11-26 19:13:31.843921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:57.997 [2024-11-26 19:13:31.843937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:57.997 [2024-11-26 19:13:31.853094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:57.997 [2024-11-26 19:13:31.853114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.861695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.861717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.870750] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.870766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.879512] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.879529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 19441.00 IOPS, 151.88 MiB/s [2024-11-26T18:13:32.122Z] [2024-11-26 19:13:31.888016] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.888031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.896792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.896807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.905384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.905400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.914157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.914172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.923146] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.923162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.931614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.257 [2024-11-26 19:13:31.931629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.257 [2024-11-26 19:13:31.940405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.940420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.949336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.949352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.958282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.958298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.967064] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.967080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.976157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.976173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.985078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.985093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:31.994097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:31.994117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.003033] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.003048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.011660] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.011676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.020699] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.020715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.029700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.029716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.038197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.038212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.046719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.046735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.055339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.055355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.064277] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.064293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.073268] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.073284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.082206] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.082222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.090712] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.090728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.099423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.099439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.108305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.108321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.258 [2024-11-26 19:13:32.117174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.258 [2024-11-26 19:13:32.117190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.126402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.126417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.135383] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.135398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.144379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.144394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.153319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.153334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.162253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.162268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.170978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.170993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.179955] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.179970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.188626] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.188641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.197421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.197437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.206402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.544 [2024-11-26 19:13:32.206418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.544 [2024-11-26 19:13:32.215166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.215181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.224193] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.224208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.232711] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.232726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.242002] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.242018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.250405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.250420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.259486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.259501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.267792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.267807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.276203] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.276218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.285245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.285260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.294276] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.294291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.302657] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.302672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.311582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.311597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.320329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.320343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.329036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.329052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.338045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.338060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.347000] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.347015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.355932] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.355951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.365220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.365235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.374192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.374207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.383050] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.383065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.391786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.391801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.545 [2024-11-26 19:13:32.400601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.545 [2024-11-26 19:13:32.400616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.409597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.409612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.418421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.418437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.427560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.427576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.436054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.436069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.444853] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.444868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.453929] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.453944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.461773] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.461788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.470990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.471004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.480053] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.480068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.488521] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.488537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.497183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.497198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.505977] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.505993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.515164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.515179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.524178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.524197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.533201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.533216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.542260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.542276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.551135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.551150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.559500] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.559516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.568634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.568648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.577700] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.577715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.586799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.586814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.595030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.595045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.603840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.603855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.612844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.612859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.621503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.621517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.630213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.630228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.638915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.638930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.648116] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.648131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.656503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.806 [2024-11-26 19:13:32.656519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:58.806 [2024-11-26 19:13:32.665576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:58.807 [2024-11-26 19:13:32.665591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.674755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.674770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.683688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.683703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.692501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.692520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.701157] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.701173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.710090] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.710110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.718618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.718632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.727546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.727561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.736802] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.736817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.745848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.745863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.754815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.754830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.763576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.763591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.772587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.772603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.781545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.781560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.790114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.790129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.798985] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.799000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.808003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.808018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.817093] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.817112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.830950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.830965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.838831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.838846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.847510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.847525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.856516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.856532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.864881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.864899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.873601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.873616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.882197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.882213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 19479.25 IOPS, 152.18 MiB/s [2024-11-26T18:13:32.932Z] [2024-11-26 19:13:32.890876] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.890890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.899505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.899520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.908503] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.908518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.917233] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.917249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.067 [2024-11-26 19:13:32.926411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.067 [2024-11-26 19:13:32.926426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.934880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.934895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.944063] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.944078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.953038] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.953054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.962007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.962023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.970625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.970640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.979439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.979454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.988536] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.988552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:32.997188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:32.997203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.006228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.006244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.015216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.015231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.024175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.024190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.032717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.032732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.041919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.041934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.328 [2024-11-26 19:13:33.050984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.328 [2024-11-26 19:13:33.050999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.059948] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.059963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.068954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.068969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.078125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.078140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.087035] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.087050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.096007] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.096022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.104311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.104326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.112474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.112489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.121135] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.121150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.130111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.130126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.139275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.139290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.148427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.148443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.157452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.157468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.166113] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.166129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.175086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.175107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.183723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.183739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.329 [2024-11-26 19:13:33.192062] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.329 [2024-11-26 19:13:33.192077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.201045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.201061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.210133] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.210149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.218677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.218693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.227158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.227173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.236448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.236464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.244996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.245013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.253720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.253735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.262506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.262521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.271391] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.271406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:06:59.590 [2024-11-26 19:13:33.279990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:06:59.590 [2024-11-26 19:13:33.280005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:06:59.590 [2024-11-26 19:13:33.289097] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.289118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.298219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.298236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.306611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.306627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.315745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.315761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.323972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.323987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.332219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.332234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.340901] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.340917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.349885] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.349900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.358837] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.358853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.367835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.367851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.376442] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.376457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.385130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.385146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.394019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.394035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.403164] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.403179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.412253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.412268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.421450] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.421465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.430524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.430540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.439396] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.439412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.590 [2024-11-26 19:13:33.448351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.590 [2024-11-26 19:13:33.448366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.457352] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.457368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.466379] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.466395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.475048] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.475064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.484030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.484046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.493009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.493025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.501996] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 
[2024-11-26 19:13:33.502011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.511023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.511039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.520086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.520107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.529087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.529112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.537993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.538009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.546964] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.546980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.556018] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.556034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.564988] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.565003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.574266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.574281] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.582696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.582712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.591890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.591906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.600154] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.600170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.608604] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.608620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.617073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.617088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.625784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.625799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.634457] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.634472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.643139] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.643154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:06:59.898 [2024-11-26 19:13:33.652111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.898 [2024-11-26 19:13:33.652127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.898 [2024-11-26 19:13:33.661080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.661095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.670322] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.670337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.679225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.679239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.688426] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.688442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.697299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.697317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.705878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.705893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.714614] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.714629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.723771] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.723786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.732643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.732658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.741111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.741126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.749759] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.749775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:06:59.899 [2024-11-26 19:13:33.758797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:06:59.899 [2024-11-26 19:13:33.758813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.767870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.767886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.776187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.776202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.785043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.785058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.793606] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.793621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.802140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.802155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.811270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.811286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.819796] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.819811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.828926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.828941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.837591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.837606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.846339] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.846354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.855561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.855576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159 [2024-11-26 19:13:33.863898] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 
[2024-11-26 19:13:33.863918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159 [2024-11-26 19:13:33.873061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:00.159 [2024-11-26 19:13:33.873077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159 [2024-11-26 19:13:33.881447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:00.159 [2024-11-26 19:13:33.881462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159 19502.00 IOPS, 152.36 MiB/s [2024-11-26T18:13:34.024Z] [2024-11-26 19:13:33.889859] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:00.159 [2024-11-26 19:13:33.889874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159
00:07:00.159 Latency(us)
00:07:00.159 [2024-11-26T18:13:34.024Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:00.159 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:07:00.159 Nvme1n1 : 5.01 19503.10 152.37 0.00 0.00 6557.71 3003.73 16602.45
00:07:00.159 [2024-11-26T18:13:34.024Z] ===================================================================================================================
00:07:00.159 [2024-11-26T18:13:34.024Z] Total : 19503.10 152.37 0.00 0.00 6557.71 3003.73 16602.45
00:07:00.159 [2024-11-26 19:13:33.895884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:00.159 [2024-11-26 19:13:33.895898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159 [2024-11-26 19:13:33.903920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:07:00.159 [2024-11-26 19:13:33.903931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:07:00.159 [2024-11-26 19:13:33.911924]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.159 [2024-11-26 19:13:33.911934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.159
[... same subsystem.c:2126/nvmf_rpc.c:1520 error pair repeated through 2024-11-26 19:13:33.968075 ...]
00:07:00.159 [2024-11-26 19:13:33.976088] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:07:00.160 [2024-11-26 19:13:33.976096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.160 [2024-11-26 19:13:33.984112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.160 [2024-11-26 19:13:33.984122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.160 [2024-11-26 19:13:33.992130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:07:00.160 [2024-11-26 19:13:33.992138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:00.160 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3543763) - No such process 00:07:00.160 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3543763 00:07:00.160 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.160 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.160 19:13:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:00.160 delay0 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.160 19:13:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:07:00.419 [2024-11-26 19:13:34.151279] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:08.553 Initializing NVMe Controllers 00:07:08.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:08.553 Initialization complete. Launching workers. 
00:07:08.553 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 244, failed: 34317 00:07:08.553 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 34444, failed to submit 117 00:07:08.553 success 34348, unsuccessful 96, failed 0 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.553 rmmod nvme_tcp 00:07:08.553 rmmod nvme_fabrics 00:07:08.553 rmmod nvme_keyring 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3541380 ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3541380 ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3541380' 00:07:08.553 killing process with pid 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3541380 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.553 19:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:09.931 00:07:09.931 real 0m32.719s 00:07:09.931 user 0m45.048s 00:07:09.931 sys 0m9.828s 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:09.931 ************************************ 00:07:09.931 END TEST nvmf_zcopy 00:07:09.931 ************************************ 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.931 ************************************ 00:07:09.931 START TEST nvmf_nmic 00:07:09.931 ************************************ 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:09.931 * Looking for test storage... 
00:07:09.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.931 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.190 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.191 19:13:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.191 --rc genhtml_branch_coverage=1 00:07:10.191 --rc genhtml_function_coverage=1 00:07:10.191 --rc genhtml_legend=1 00:07:10.191 --rc geninfo_all_blocks=1 00:07:10.191 --rc geninfo_unexecuted_blocks=1 
00:07:10.191 00:07:10.191 ' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.191 --rc genhtml_branch_coverage=1 00:07:10.191 --rc genhtml_function_coverage=1 00:07:10.191 --rc genhtml_legend=1 00:07:10.191 --rc geninfo_all_blocks=1 00:07:10.191 --rc geninfo_unexecuted_blocks=1 00:07:10.191 00:07:10.191 ' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.191 --rc genhtml_branch_coverage=1 00:07:10.191 --rc genhtml_function_coverage=1 00:07:10.191 --rc genhtml_legend=1 00:07:10.191 --rc geninfo_all_blocks=1 00:07:10.191 --rc geninfo_unexecuted_blocks=1 00:07:10.191 00:07:10.191 ' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.191 --rc genhtml_branch_coverage=1 00:07:10.191 --rc genhtml_function_coverage=1 00:07:10.191 --rc genhtml_legend=1 00:07:10.191 --rc geninfo_all_blocks=1 00:07:10.191 --rc geninfo_unexecuted_blocks=1 00:07:10.191 00:07:10.191 ' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.191 19:13:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.191 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.192 
19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.192 19:13:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.477 19:13:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:15.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:15.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:15.477 Found net devices under 0000:31:00.0: cvl_0_0 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:15.477 Found net devices under 0000:31:00.1: cvl_0_1 00:07:15.477 
19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.477 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.478 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.478 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.478 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:15.478 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.478 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:15.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms 00:07:15.737 00:07:15.737 --- 10.0.0.2 ping statistics --- 00:07:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.737 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:15.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:07:15.737 00:07:15.737 --- 10.0.0.1 ping statistics --- 00:07:15.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.737 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3551102 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3551102 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3551102 
']' 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.737 [2024-11-26 19:13:49.419369] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:07:15.737 [2024-11-26 19:13:49.419424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.737 [2024-11-26 19:13:49.482654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.737 [2024-11-26 19:13:49.515248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.737 [2024-11-26 19:13:49.515278] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.737 [2024-11-26 19:13:49.515284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.737 [2024-11-26 19:13:49.515289] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:15.737 [2024-11-26 19:13:49.515293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:15.737 [2024-11-26 19:13:49.516800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.737 [2024-11-26 19:13:49.516957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.737 [2024-11-26 19:13:49.517072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.737 [2024-11-26 19:13:49.517073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:15.737 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-11-26 19:13:49.625831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:15.997 19:13:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 Malloc0 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-11-26 19:13:49.678943] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:07:15.997 test case1: single bdev can't be used in multiple subsystems 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-11-26 19:13:49.702831] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:07:15.997 [2024-11-26 19:13:49.702846] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:07:15.997 [2024-11-26 19:13:49.702852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:07:15.997 request: 00:07:15.997 { 00:07:15.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:15.997 "namespace": { 00:07:15.997 "bdev_name": "Malloc0", 00:07:15.997 "no_auto_visible": false, 00:07:15.997 "hide_metadata": false 00:07:15.997 }, 00:07:15.997 "method": "nvmf_subsystem_add_ns", 00:07:15.997 "req_id": 1 00:07:15.997 } 00:07:15.997 Got JSON-RPC error response 00:07:15.997 response: 00:07:15.997 { 00:07:15.997 "code": -32602, 00:07:15.997 "message": "Invalid parameters" 00:07:15.997 } 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:07:15.997 Adding namespace failed - expected result. 
00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:07:15.997 test case2: host connect to nvmf target in multiple paths 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:15.997 [2024-11-26 19:13:49.710936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.997 19:13:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:17.905 19:13:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:07:19.284 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:07:19.284 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:07:19.284 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:19.284 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:19.284 19:13:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:07:21.208 19:13:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:07:21.208 [global] 00:07:21.208 thread=1 00:07:21.208 invalidate=1 00:07:21.208 rw=write 00:07:21.208 time_based=1 00:07:21.208 runtime=1 00:07:21.208 ioengine=libaio 00:07:21.208 direct=1 00:07:21.208 bs=4096 00:07:21.208 iodepth=1 00:07:21.208 norandommap=0 00:07:21.208 numjobs=1 00:07:21.208 00:07:21.208 verify_dump=1 00:07:21.208 verify_backlog=512 00:07:21.208 verify_state_save=0 00:07:21.208 do_verify=1 00:07:21.208 verify=crc32c-intel 00:07:21.208 [job0] 00:07:21.208 filename=/dev/nvme0n1 00:07:21.208 Could not set queue depth (nvme0n1) 00:07:21.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:21.472 fio-3.35 00:07:21.472 Starting 1 thread 00:07:22.854 00:07:22.854 job0: (groupid=0, jobs=1): err= 0: pid=3552639: Tue Nov 26 19:13:56 2024 00:07:22.854 read: IOPS=878, BW=3512KiB/s (3597kB/s)(3516KiB/1001msec) 00:07:22.855 slat (nsec): min=2973, max=59586, avg=14123.44, stdev=7251.53 00:07:22.855 clat (usec): min=212, max=938, avg=666.15, stdev=121.28 00:07:22.855 lat (usec): min=222, max=962, 
avg=680.27, stdev=121.28 00:07:22.855 clat percentiles (usec): 00:07:22.855 | 1.00th=[ 379], 5.00th=[ 441], 10.00th=[ 482], 20.00th=[ 562], 00:07:22.855 | 30.00th=[ 603], 40.00th=[ 652], 50.00th=[ 685], 60.00th=[ 717], 00:07:22.855 | 70.00th=[ 750], 80.00th=[ 775], 90.00th=[ 799], 95.00th=[ 832], 00:07:22.855 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 938], 99.95th=[ 938], 00:07:22.855 | 99.99th=[ 938] 00:07:22.855 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:22.855 slat (nsec): min=4128, max=49387, avg=16972.46, stdev=9185.67 00:07:22.855 clat (usec): min=112, max=689, avg=367.35, stdev=101.47 00:07:22.855 lat (usec): min=122, max=700, avg=384.33, stdev=103.52 00:07:22.855 clat percentiles (usec): 00:07:22.855 | 1.00th=[ 172], 5.00th=[ 212], 10.00th=[ 249], 20.00th=[ 281], 00:07:22.855 | 30.00th=[ 306], 40.00th=[ 330], 50.00th=[ 359], 60.00th=[ 383], 00:07:22.855 | 70.00th=[ 412], 80.00th=[ 457], 90.00th=[ 515], 95.00th=[ 545], 00:07:22.855 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 685], 99.95th=[ 693], 00:07:22.855 | 99.99th=[ 693] 00:07:22.855 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:07:22.855 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:22.855 lat (usec) : 250=5.57%, 500=46.82%, 750=34.05%, 1000=13.56% 00:07:22.855 cpu : usr=1.80%, sys=2.80%, ctx=1903, majf=0, minf=1 00:07:22.855 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:22.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:22.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:22.855 issued rwts: total=879,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:22.855 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:22.855 00:07:22.855 Run status group 0 (all jobs): 00:07:22.855 READ: bw=3512KiB/s (3597kB/s), 3512KiB/s-3512KiB/s (3597kB/s-3597kB/s), io=3516KiB (3600kB), run=1001-1001msec 
00:07:22.855 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:07:22.855 00:07:22.855 Disk stats (read/write): 00:07:22.855 nvme0n1: ios=779/1024, merge=0/0, ticks=533/364, in_queue=897, util=93.69% 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:22.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 
00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:22.855 rmmod nvme_tcp 00:07:22.855 rmmod nvme_fabrics 00:07:22.855 rmmod nvme_keyring 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3551102 ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3551102 ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3551102' 00:07:22.855 killing process with pid 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3551102 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.855 19:13:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.421 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:25.421 00:07:25.421 real 0m15.067s 00:07:25.421 user 0m41.306s 00:07:25.421 sys 0m4.910s 00:07:25.421 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.421 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:25.421 ************************************ 00:07:25.422 END TEST nvmf_nmic 00:07:25.422 ************************************ 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:25.422 ************************************ 00:07:25.422 START TEST nvmf_fio_target 00:07:25.422 ************************************ 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:07:25.422 * Looking for test storage... 00:07:25.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:07:25.422 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.423 19:13:58 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.423 --rc genhtml_branch_coverage=1 00:07:25.423 --rc genhtml_function_coverage=1 00:07:25.423 --rc genhtml_legend=1 00:07:25.423 --rc geninfo_all_blocks=1 00:07:25.423 --rc geninfo_unexecuted_blocks=1 00:07:25.423 00:07:25.423 ' 00:07:25.423 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.423 --rc genhtml_branch_coverage=1 00:07:25.423 --rc genhtml_function_coverage=1 00:07:25.423 --rc genhtml_legend=1 00:07:25.424 --rc geninfo_all_blocks=1 00:07:25.424 --rc geninfo_unexecuted_blocks=1 00:07:25.424 00:07:25.424 ' 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.424 --rc genhtml_branch_coverage=1 00:07:25.424 --rc genhtml_function_coverage=1 00:07:25.424 --rc genhtml_legend=1 00:07:25.424 --rc geninfo_all_blocks=1 00:07:25.424 --rc geninfo_unexecuted_blocks=1 00:07:25.424 00:07:25.424 ' 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.424 --rc 
genhtml_branch_coverage=1 00:07:25.424 --rc genhtml_function_coverage=1 00:07:25.424 --rc genhtml_legend=1 00:07:25.424 --rc geninfo_all_blocks=1 00:07:25.424 --rc geninfo_unexecuted_blocks=1 00:07:25.424 00:07:25.424 ' 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:25.424 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.425 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:25.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:25.426 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:07:25.427 19:13:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:30.701 19:14:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.701 19:14:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:30.701 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:30.701 19:14:04 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:30.701 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:30.701 Found net devices under 0000:31:00.0: cvl_0_0 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:30.701 Found net devices under 0000:31:00.1: cvl_0_1 
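The device-discovery trace above leans on two bash idioms: the escaped-glob comparison `[[ 0x159b == \0\x\1\0\1\7 ]]` (backslash-escaping every pattern character forces a byte-for-byte literal match instead of glob matching) and the array expansion `pci_net_devs=("${pci_net_devs[@]##*/}")`, which strips the longest prefix ending in `/` from each element, reducing sysfs paths to bare interface names. A minimal runnable sketch of both, using the same sysfs-style paths seen in the log:

```shell
# Literal match: escaping each pattern character makes [[ == ]] compare
# byte-for-byte instead of treating the right-hand side as a glob.
device=0x159b
if [[ $device == \0\x\1\5\9\b ]]; then matched=yes; else matched=no; fi

# "${arr[@]##*/}" applies the prefix-strip to every array element,
# turning /sys/.../net/cvl_0_0 into cvl_0_0, as nvmf/common.sh@427 does.
pci_net_devs=("/sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0"
              "/sys/bus/pci/devices/0000:31:00.1/net/cvl_0_1")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "$matched ${pci_net_devs[*]}"   # → yes cvl_0_0 cvl_0_1
```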
00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.701 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:07:30.702 00:07:30.702 --- 10.0.0.2 ping statistics --- 00:07:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.702 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:07:30.702 00:07:30.702 --- 10.0.0.1 ping statistics --- 00:07:30.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.702 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
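The namespace setup traced above stores a command prefix in an array, `NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")`, and later expands it in front of `ping` and the `nvmf_tgt` app so they run inside the target's network namespace. A sketch of that array-as-command-prefix pattern; `echo` stands in for the real namespace entry so the sketch runs without root or an existing netns:

```shell
# Store a command prefix as an array; expanding "${cmd[@]}" preserves
# word boundaries, unlike flattening the prefix into a single string.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_TARGET_NS_CMD=(echo ip netns exec "$NVMF_TARGET_NAMESPACE")

# In the real script this runs: ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
out="$("${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1)"
echo "$out"
```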
00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3557324 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3557324 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3557324 ']' 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:30.702 19:14:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:30.702 [2024-11-26 19:14:04.336873] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:07:30.702 [2024-11-26 19:14:04.336922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.702 [2024-11-26 19:14:04.410753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.702 [2024-11-26 19:14:04.440912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.702 [2024-11-26 19:14:04.440942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.702 [2024-11-26 19:14:04.440948] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.702 [2024-11-26 19:14:04.440952] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.702 [2024-11-26 19:14:04.440956] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.702 [2024-11-26 19:14:04.442144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.702 [2024-11-26 19:14:04.442246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.702 [2024-11-26 19:14:04.442398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.702 [2024-11-26 19:14:04.442400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.270 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.270 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:07:31.270 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:31.270 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:31.270 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:31.528 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:31.528 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:31.528 [2024-11-26 19:14:05.285171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:31.528 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.788 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:07:31.788 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.788 19:14:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:07:32.047 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.047 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:07:32.047 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.307 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:07:32.307 19:14:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:07:32.307 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.566 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:07:32.566 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.825 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:07:32.825 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.825 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:07:32.825 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:07:33.084 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:33.343 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:07:33.343 19:14:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.343 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:07:33.343 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:33.601 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.601 [2024-11-26 19:14:07.434973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.601 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:07:33.860 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:07:34.119 19:14:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
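The RPC sequence above collects bdev names by string accumulation (`malloc_bdevs='Malloc0 '`, then `malloc_bdevs+=Malloc1`) and later relies on unquoted word splitting in `for malloc_bdev in $malloc_bdevs` to add one namespace per bdev. A runnable sketch of that accumulate-then-split pattern, with the `rpc.py` call replaced by recording the names:

```shell
# Accumulate bdev names into one space-separated string, as target/fio.sh
# does after each bdev_malloc_create call.
malloc_bdevs='Malloc0 '
malloc_bdevs+=Malloc1

# Unquoted expansion splits on whitespace, one loop pass per bdev; the
# real loop calls rpc.py nvmf_subsystem_add_ns for each name.
added=()
for malloc_bdev in $malloc_bdevs; do
    added+=("$malloc_bdev")
done
echo "${added[*]}"   # → Malloc0 Malloc1
```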
00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:07:35.495 19:14:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:37.397 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:07:37.685 19:14:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:07:37.685 [global] 00:07:37.685 thread=1 00:07:37.685 invalidate=1 00:07:37.685 rw=write 00:07:37.685 time_based=1 00:07:37.685 runtime=1 00:07:37.685 ioengine=libaio 00:07:37.685 direct=1 00:07:37.685 bs=4096 00:07:37.685 iodepth=1 00:07:37.685 norandommap=0 00:07:37.685 numjobs=1 00:07:37.685 00:07:37.685 
verify_dump=1 00:07:37.685 verify_backlog=512 00:07:37.685 verify_state_save=0 00:07:37.685 do_verify=1 00:07:37.685 verify=crc32c-intel 00:07:37.685 [job0] 00:07:37.685 filename=/dev/nvme0n1 00:07:37.685 [job1] 00:07:37.685 filename=/dev/nvme0n2 00:07:37.685 [job2] 00:07:37.685 filename=/dev/nvme0n3 00:07:37.685 [job3] 00:07:37.685 filename=/dev/nvme0n4 00:07:37.685 Could not set queue depth (nvme0n1) 00:07:37.685 Could not set queue depth (nvme0n2) 00:07:37.685 Could not set queue depth (nvme0n3) 00:07:37.685 Could not set queue depth (nvme0n4) 00:07:37.947 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:37.947 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:37.947 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:37.947 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:37.947 fio-3.35 00:07:37.947 Starting 4 threads 00:07:39.326 00:07:39.326 job0: (groupid=0, jobs=1): err= 0: pid=3559050: Tue Nov 26 19:14:12 2024 00:07:39.326 read: IOPS=18, BW=74.2KiB/s (76.0kB/s)(76.0KiB/1024msec) 00:07:39.326 slat (nsec): min=10964, max=27087, avg=25876.74, stdev=3615.10 00:07:39.326 clat (usec): min=41888, max=42210, avg=41978.92, stdev=64.68 00:07:39.326 lat (usec): min=41915, max=42221, avg=42004.80, stdev=61.54 00:07:39.326 clat percentiles (usec): 00:07:39.326 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:07:39.326 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:07:39.327 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:07:39.327 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:07:39.327 | 99.99th=[42206] 00:07:39.327 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:07:39.327 slat (nsec): min=9820, max=51367, 
avg=26742.72, stdev=11457.64 00:07:39.327 clat (usec): min=158, max=689, avg=405.90, stdev=75.15 00:07:39.327 lat (usec): min=168, max=700, avg=432.64, stdev=80.58 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 227], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 330], 00:07:39.327 | 30.00th=[ 351], 40.00th=[ 400], 50.00th=[ 424], 60.00th=[ 441], 00:07:39.327 | 70.00th=[ 453], 80.00th=[ 469], 90.00th=[ 490], 95.00th=[ 515], 00:07:39.327 | 99.00th=[ 570], 99.50th=[ 603], 99.90th=[ 693], 99.95th=[ 693], 00:07:39.327 | 99.99th=[ 693] 00:07:39.327 bw ( KiB/s): min= 4096, max= 4096, per=34.18%, avg=4096.00, stdev= 0.00, samples=1 00:07:39.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:39.327 lat (usec) : 250=2.45%, 500=87.57%, 750=6.40% 00:07:39.327 lat (msec) : 50=3.58% 00:07:39.327 cpu : usr=0.88%, sys=1.08%, ctx=534, majf=0, minf=1 00:07:39.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:39.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:39.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:39.327 job1: (groupid=0, jobs=1): err= 0: pid=3559065: Tue Nov 26 19:14:12 2024 00:07:39.327 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:07:39.327 slat (nsec): min=2636, max=57991, avg=16848.62, stdev=6634.35 00:07:39.327 clat (usec): min=596, max=1042, avg=876.31, stdev=67.13 00:07:39.327 lat (usec): min=608, max=1069, avg=893.16, stdev=68.62 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 693], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 824], 00:07:39.327 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:07:39.327 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 955], 95.00th=[ 979], 00:07:39.327 | 99.00th=[ 1012], 99.50th=[ 1029], 99.90th=[ 1045], 99.95th=[ 
1045], 00:07:39.327 | 99.99th=[ 1045] 00:07:39.327 write: IOPS=1018, BW=4076KiB/s (4174kB/s)(4080KiB/1001msec); 0 zone resets 00:07:39.327 slat (nsec): min=3387, max=54929, avg=16585.56, stdev=10557.52 00:07:39.327 clat (usec): min=197, max=874, avg=508.95, stdev=122.76 00:07:39.327 lat (usec): min=201, max=888, avg=525.54, stdev=127.25 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 258], 5.00th=[ 297], 10.00th=[ 334], 20.00th=[ 400], 00:07:39.327 | 30.00th=[ 441], 40.00th=[ 486], 50.00th=[ 515], 60.00th=[ 537], 00:07:39.327 | 70.00th=[ 578], 80.00th=[ 619], 90.00th=[ 668], 95.00th=[ 709], 00:07:39.327 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 840], 99.95th=[ 873], 00:07:39.327 | 99.99th=[ 873] 00:07:39.327 bw ( KiB/s): min= 4096, max= 4096, per=34.18%, avg=4096.00, stdev= 0.00, samples=1 00:07:39.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:39.327 lat (usec) : 250=0.46%, 500=29.57%, 750=36.49%, 1000=32.77% 00:07:39.327 lat (msec) : 2=0.72% 00:07:39.327 cpu : usr=1.50%, sys=4.10%, ctx=1534, majf=0, minf=1 00:07:39.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:39.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 issued rwts: total=512,1020,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:39.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:39.327 job2: (groupid=0, jobs=1): err= 0: pid=3559090: Tue Nov 26 19:14:12 2024 00:07:39.327 read: IOPS=726, BW=2905KiB/s (2975kB/s)(2908KiB/1001msec) 00:07:39.327 slat (nsec): min=3301, max=44195, avg=16123.15, stdev=8235.99 00:07:39.327 clat (usec): min=229, max=968, avg=709.30, stdev=131.48 00:07:39.327 lat (usec): min=233, max=980, avg=725.43, stdev=132.54 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 343], 5.00th=[ 461], 10.00th=[ 519], 20.00th=[ 603], 00:07:39.327 | 30.00th=[ 652], 40.00th=[ 
701], 50.00th=[ 734], 60.00th=[ 758], 00:07:39.327 | 70.00th=[ 791], 80.00th=[ 816], 90.00th=[ 865], 95.00th=[ 898], 00:07:39.327 | 99.00th=[ 947], 99.50th=[ 947], 99.90th=[ 971], 99.95th=[ 971], 00:07:39.327 | 99.99th=[ 971] 00:07:39.327 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:07:39.327 slat (nsec): min=4172, max=60179, avg=15635.83, stdev=7302.11 00:07:39.327 clat (usec): min=89, max=810, avg=438.16, stdev=112.50 00:07:39.327 lat (usec): min=94, max=815, avg=453.80, stdev=114.65 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 196], 5.00th=[ 243], 10.00th=[ 297], 20.00th=[ 338], 00:07:39.327 | 30.00th=[ 375], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 469], 00:07:39.327 | 70.00th=[ 498], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 627], 00:07:39.327 | 99.00th=[ 693], 99.50th=[ 701], 99.90th=[ 750], 99.95th=[ 807], 00:07:39.327 | 99.99th=[ 807] 00:07:39.327 bw ( KiB/s): min= 4096, max= 4096, per=34.18%, avg=4096.00, stdev= 0.00, samples=1 00:07:39.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:39.327 lat (usec) : 100=0.11%, 250=3.43%, 500=41.40%, 750=36.89%, 1000=18.16% 00:07:39.327 cpu : usr=1.10%, sys=2.90%, ctx=1753, majf=0, minf=1 00:07:39.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:39.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 issued rwts: total=727,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:39.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:39.327 job3: (groupid=0, jobs=1): err= 0: pid=3559096: Tue Nov 26 19:14:12 2024 00:07:39.327 read: IOPS=17, BW=71.8KiB/s (73.5kB/s)(72.0KiB/1003msec) 00:07:39.327 slat (nsec): min=26370, max=27314, avg=26903.94, stdev=267.57 00:07:39.327 clat (usec): min=1052, max=42105, avg=39630.93, stdev=9631.42 00:07:39.327 lat (usec): min=1078, max=42133, 
avg=39657.83, stdev=9631.49 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41681], 00:07:39.327 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:07:39.327 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:07:39.327 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:07:39.327 | 99.99th=[42206] 00:07:39.327 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:07:39.327 slat (usec): min=4, max=4923, avg=26.03, stdev=217.07 00:07:39.327 clat (usec): min=257, max=882, avg=532.00, stdev=118.93 00:07:39.327 lat (usec): min=261, max=5680, avg=558.03, stdev=256.52 00:07:39.327 clat percentiles (usec): 00:07:39.327 | 1.00th=[ 302], 5.00th=[ 330], 10.00th=[ 379], 20.00th=[ 429], 00:07:39.327 | 30.00th=[ 461], 40.00th=[ 498], 50.00th=[ 529], 60.00th=[ 570], 00:07:39.327 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 734], 00:07:39.327 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 881], 99.95th=[ 881], 00:07:39.327 | 99.99th=[ 881] 00:07:39.327 bw ( KiB/s): min= 4096, max= 4096, per=34.18%, avg=4096.00, stdev= 0.00, samples=1 00:07:39.327 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:39.327 lat (usec) : 500=38.87%, 750=54.34%, 1000=3.40% 00:07:39.327 lat (msec) : 2=0.19%, 50=3.21% 00:07:39.327 cpu : usr=0.40%, sys=0.70%, ctx=533, majf=0, minf=1 00:07:39.327 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:39.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:39.327 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:39.327 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:39.327 00:07:39.327 Run status group 0 (all jobs): 00:07:39.327 READ: bw=4984KiB/s (5104kB/s), 71.8KiB/s-2905KiB/s (73.5kB/s-2975kB/s), 
io=5104KiB (5226kB), run=1001-1024msec 00:07:39.327 WRITE: bw=11.7MiB/s (12.3MB/s), 2000KiB/s-4092KiB/s (2048kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1024msec 00:07:39.327 00:07:39.327 Disk stats (read/write): 00:07:39.327 nvme0n1: ios=69/512, merge=0/0, ticks=804/207, in_queue=1011, util=87.07% 00:07:39.327 nvme0n2: ios=567/720, merge=0/0, ticks=483/291, in_queue=774, util=91.22% 00:07:39.327 nvme0n3: ios=575/999, merge=0/0, ticks=935/414, in_queue=1349, util=95.34% 00:07:39.327 nvme0n4: ios=62/512, merge=0/0, ticks=969/258, in_queue=1227, util=96.79% 00:07:39.327 19:14:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:07:39.327 [global] 00:07:39.327 thread=1 00:07:39.327 invalidate=1 00:07:39.327 rw=randwrite 00:07:39.327 time_based=1 00:07:39.327 runtime=1 00:07:39.327 ioengine=libaio 00:07:39.327 direct=1 00:07:39.327 bs=4096 00:07:39.327 iodepth=1 00:07:39.327 norandommap=0 00:07:39.327 numjobs=1 00:07:39.327 00:07:39.327 verify_dump=1 00:07:39.327 verify_backlog=512 00:07:39.327 verify_state_save=0 00:07:39.327 do_verify=1 00:07:39.327 verify=crc32c-intel 00:07:39.327 [job0] 00:07:39.327 filename=/dev/nvme0n1 00:07:39.327 [job1] 00:07:39.327 filename=/dev/nvme0n2 00:07:39.327 [job2] 00:07:39.327 filename=/dev/nvme0n3 00:07:39.327 [job3] 00:07:39.327 filename=/dev/nvme0n4 00:07:39.327 Could not set queue depth (nvme0n1) 00:07:39.327 Could not set queue depth (nvme0n2) 00:07:39.327 Could not set queue depth (nvme0n3) 00:07:39.327 Could not set queue depth (nvme0n4) 00:07:39.327 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:39.327 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:39.327 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:07:39.327 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:39.327 fio-3.35 00:07:39.327 Starting 4 threads 00:07:40.703 00:07:40.703 job0: (groupid=0, jobs=1): err= 0: pid=3559575: Tue Nov 26 19:14:14 2024 00:07:40.703 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:07:40.703 slat (nsec): min=3959, max=61940, avg=17347.20, stdev=7092.19 00:07:40.703 clat (usec): min=480, max=4189, avg=869.93, stdev=171.97 00:07:40.703 lat (usec): min=492, max=4217, avg=887.28, stdev=172.24 00:07:40.703 clat percentiles (usec): 00:07:40.703 | 1.00th=[ 562], 5.00th=[ 693], 10.00th=[ 750], 20.00th=[ 807], 00:07:40.703 | 30.00th=[ 840], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:07:40.703 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 955], 95.00th=[ 979], 00:07:40.703 | 99.00th=[ 1020], 99.50th=[ 1029], 99.90th=[ 4178], 99.95th=[ 4178], 00:07:40.703 | 99.99th=[ 4178] 00:07:40.703 write: IOPS=983, BW=3932KiB/s (4026kB/s)(3936KiB/1001msec); 0 zone resets 00:07:40.703 slat (nsec): min=3710, max=68198, avg=19348.54, stdev=11064.25 00:07:40.703 clat (usec): min=216, max=3864, avg=526.85, stdev=161.33 00:07:40.703 lat (usec): min=226, max=3880, avg=546.20, stdev=163.29 00:07:40.703 clat percentiles (usec): 00:07:40.703 | 1.00th=[ 253], 5.00th=[ 322], 10.00th=[ 371], 20.00th=[ 412], 00:07:40.703 | 30.00th=[ 457], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[ 545], 00:07:40.703 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 725], 00:07:40.703 | 99.00th=[ 816], 99.50th=[ 865], 99.90th=[ 3851], 99.95th=[ 3851], 00:07:40.703 | 99.99th=[ 3851] 00:07:40.703 bw ( KiB/s): min= 4096, max= 4096, per=41.29%, avg=4096.00, stdev= 0.00, samples=1 00:07:40.703 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:40.703 lat (usec) : 250=0.47%, 500=25.74%, 750=41.11%, 1000=32.09% 00:07:40.703 lat (msec) : 2=0.47%, 4=0.07%, 10=0.07% 00:07:40.703 cpu : usr=1.70%, sys=4.40%, ctx=1500, majf=0, minf=1 
00:07:40.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 issued rwts: total=512,984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:40.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:40.704 job1: (groupid=0, jobs=1): err= 0: pid=3559590: Tue Nov 26 19:14:14 2024 00:07:40.704 read: IOPS=18, BW=75.6KiB/s (77.4kB/s)(76.0KiB/1005msec) 00:07:40.704 slat (nsec): min=4651, max=28461, avg=25726.21, stdev=5999.11 00:07:40.704 clat (usec): min=944, max=42965, avg=39843.99, stdev=9434.13 00:07:40.704 lat (usec): min=949, max=42993, avg=39869.71, stdev=9439.24 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[ 947], 5.00th=[ 947], 10.00th=[41157], 20.00th=[41681], 00:07:40.704 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:07:40.704 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:07:40.704 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:07:40.704 | 99.99th=[42730] 00:07:40.704 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:07:40.704 slat (nsec): min=3638, max=50171, avg=14611.59, stdev=7131.96 00:07:40.704 clat (usec): min=100, max=779, avg=460.19, stdev=123.76 00:07:40.704 lat (usec): min=114, max=798, avg=474.80, stdev=124.30 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[ 149], 5.00th=[ 262], 10.00th=[ 293], 20.00th=[ 355], 00:07:40.704 | 30.00th=[ 396], 40.00th=[ 437], 50.00th=[ 461], 60.00th=[ 498], 00:07:40.704 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 619], 95.00th=[ 652], 00:07:40.704 | 99.00th=[ 734], 99.50th=[ 750], 99.90th=[ 783], 99.95th=[ 783], 00:07:40.704 | 99.99th=[ 783] 00:07:40.704 bw ( KiB/s): min= 4096, max= 4096, per=41.29%, avg=4096.00, stdev= 0.00, samples=1 00:07:40.704 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:07:40.704 lat (usec) : 250=4.14%, 500=54.05%, 750=37.66%, 1000=0.75% 00:07:40.704 lat (msec) : 50=3.39% 00:07:40.704 cpu : usr=0.20%, sys=1.00%, ctx=533, majf=0, minf=1 00:07:40.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:40.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:40.704 job2: (groupid=0, jobs=1): err= 0: pid=3559613: Tue Nov 26 19:14:14 2024 00:07:40.704 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:07:40.704 slat (nsec): min=10528, max=26100, avg=24436.19, stdev=3715.41 00:07:40.704 clat (usec): min=41004, max=42991, avg=42124.75, stdev=490.76 00:07:40.704 lat (usec): min=41030, max=43017, avg=42149.19, stdev=490.04 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:07:40.704 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:07:40.704 | 70.00th=[42206], 80.00th=[42730], 90.00th=[42730], 95.00th=[43254], 00:07:40.704 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:07:40.704 | 99.99th=[43254] 00:07:40.704 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:07:40.704 slat (nsec): min=3755, max=33363, avg=12732.52, stdev=5497.43 00:07:40.704 clat (usec): min=203, max=949, avg=625.57, stdev=124.17 00:07:40.704 lat (usec): min=209, max=963, avg=638.30, stdev=125.51 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[ 289], 5.00th=[ 420], 10.00th=[ 461], 20.00th=[ 519], 00:07:40.704 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 660], 00:07:40.704 | 70.00th=[ 693], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 824], 00:07:40.704 | 99.00th=[ 881], 
99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 947], 00:07:40.704 | 99.99th=[ 947] 00:07:40.704 bw ( KiB/s): min= 4096, max= 4096, per=41.29%, avg=4096.00, stdev= 0.00, samples=1 00:07:40.704 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:40.704 lat (usec) : 250=0.57%, 500=14.58%, 750=66.67%, 1000=15.15% 00:07:40.704 lat (msec) : 50=3.03% 00:07:40.704 cpu : usr=0.60%, sys=0.30%, ctx=528, majf=0, minf=1 00:07:40.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:40.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:40.704 job3: (groupid=0, jobs=1): err= 0: pid=3559621: Tue Nov 26 19:14:14 2024 00:07:40.704 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec) 00:07:40.704 slat (nsec): min=3830, max=27194, avg=24819.73, stdev=5693.56 00:07:40.704 clat (usec): min=586, max=41996, avg=39530.33, stdev=8713.00 00:07:40.704 lat (usec): min=590, max=42023, avg=39555.15, stdev=8717.73 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[ 586], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:07:40.704 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:07:40.704 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:07:40.704 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:07:40.704 | 99.99th=[42206] 00:07:40.704 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:07:40.704 slat (nsec): min=3283, max=51476, avg=10792.02, stdev=6628.15 00:07:40.704 clat (usec): min=90, max=832, avg=270.04, stdev=128.36 00:07:40.704 lat (usec): min=94, max=846, avg=280.83, stdev=132.12 00:07:40.704 clat percentiles (usec): 00:07:40.704 | 1.00th=[ 101], 5.00th=[ 106], 10.00th=[ 122], 
20.00th=[ 130], 00:07:40.704 | 30.00th=[ 184], 40.00th=[ 225], 50.00th=[ 260], 60.00th=[ 293], 00:07:40.704 | 70.00th=[ 330], 80.00th=[ 375], 90.00th=[ 449], 95.00th=[ 502], 00:07:40.704 | 99.00th=[ 603], 99.50th=[ 660], 99.90th=[ 832], 99.95th=[ 832], 00:07:40.704 | 99.99th=[ 832] 00:07:40.704 bw ( KiB/s): min= 4096, max= 4096, per=41.29%, avg=4096.00, stdev= 0.00, samples=1 00:07:40.704 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:07:40.704 lat (usec) : 100=0.94%, 250=44.76%, 500=45.13%, 750=4.87%, 1000=0.37% 00:07:40.704 lat (msec) : 50=3.93% 00:07:40.704 cpu : usr=0.49%, sys=0.79%, ctx=534, majf=0, minf=1 00:07:40.704 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:40.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:40.704 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:40.704 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:40.704 00:07:40.704 Run status group 0 (all jobs): 00:07:40.704 READ: bw=2240KiB/s (2294kB/s), 63.8KiB/s-2046KiB/s (65.3kB/s-2095kB/s), io=2276KiB (2331kB), run=1001-1016msec 00:07:40.704 WRITE: bw=9921KiB/s (10.2MB/s), 2016KiB/s-3932KiB/s (2064kB/s-4026kB/s), io=9.84MiB (10.3MB), run=1001-1016msec 00:07:40.704 00:07:40.704 Disk stats (read/write): 00:07:40.704 nvme0n1: ios=553/681, merge=0/0, ticks=810/293, in_queue=1103, util=98.60% 00:07:40.704 nvme0n2: ios=44/512, merge=0/0, ticks=833/229, in_queue=1062, util=99.80% 00:07:40.704 nvme0n3: ios=63/512, merge=0/0, ticks=563/312, in_queue=875, util=90.61% 00:07:40.704 nvme0n4: ios=56/512, merge=0/0, ticks=720/85, in_queue=805, util=91.57% 00:07:40.704 19:14:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:07:40.704 [global] 00:07:40.704 thread=1 
00:07:40.704 invalidate=1 00:07:40.704 rw=write 00:07:40.704 time_based=1 00:07:40.704 runtime=1 00:07:40.704 ioengine=libaio 00:07:40.704 direct=1 00:07:40.704 bs=4096 00:07:40.704 iodepth=128 00:07:40.704 norandommap=0 00:07:40.704 numjobs=1 00:07:40.704 00:07:40.704 verify_dump=1 00:07:40.704 verify_backlog=512 00:07:40.704 verify_state_save=0 00:07:40.704 do_verify=1 00:07:40.704 verify=crc32c-intel 00:07:40.704 [job0] 00:07:40.704 filename=/dev/nvme0n1 00:07:40.704 [job1] 00:07:40.704 filename=/dev/nvme0n2 00:07:40.704 [job2] 00:07:40.704 filename=/dev/nvme0n3 00:07:40.704 [job3] 00:07:40.704 filename=/dev/nvme0n4 00:07:40.704 Could not set queue depth (nvme0n1) 00:07:40.704 Could not set queue depth (nvme0n2) 00:07:40.704 Could not set queue depth (nvme0n3) 00:07:40.704 Could not set queue depth (nvme0n4) 00:07:40.963 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:40.963 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:40.963 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:40.963 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:40.963 fio-3.35 00:07:40.963 Starting 4 threads 00:07:42.360 00:07:42.360 job0: (groupid=0, jobs=1): err= 0: pid=3560107: Tue Nov 26 19:14:15 2024 00:07:42.360 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1008msec) 00:07:42.360 slat (nsec): min=983, max=19286k, avg=86096.58, stdev=726658.83 00:07:42.360 clat (usec): min=3000, max=45580, avg=11505.32, stdev=5790.85 00:07:42.360 lat (usec): min=3005, max=45589, avg=11591.42, stdev=5856.52 00:07:42.360 clat percentiles (usec): 00:07:42.360 | 1.00th=[ 5145], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 7504], 00:07:42.360 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[11207], 00:07:42.360 | 70.00th=[13304], 
80.00th=[15139], 90.00th=[19530], 95.00th=[22676], 00:07:42.360 | 99.00th=[30016], 99.50th=[43254], 99.90th=[44827], 99.95th=[45351], 00:07:42.360 | 99.99th=[45351] 00:07:42.360 write: IOPS=6045, BW=23.6MiB/s (24.8MB/s)(23.8MiB/1008msec); 0 zone resets 00:07:42.360 slat (nsec): min=1665, max=21244k, avg=78513.54, stdev=566552.45 00:07:42.360 clat (usec): min=1419, max=45538, avg=10330.77, stdev=6364.88 00:07:42.360 lat (usec): min=1423, max=45540, avg=10409.28, stdev=6407.88 00:07:42.360 clat percentiles (usec): 00:07:42.360 | 1.00th=[ 2966], 5.00th=[ 4178], 10.00th=[ 5407], 20.00th=[ 6325], 00:07:42.360 | 30.00th=[ 6849], 40.00th=[ 7111], 50.00th=[ 7504], 60.00th=[ 8717], 00:07:42.360 | 70.00th=[12649], 80.00th=[13566], 90.00th=[17957], 95.00th=[25297], 00:07:42.360 | 99.00th=[35390], 99.50th=[38536], 99.90th=[41681], 99.95th=[41681], 00:07:42.360 | 99.99th=[45351] 00:07:42.360 bw ( KiB/s): min=23176, max=24601, per=24.93%, avg=23888.50, stdev=1007.63, samples=2 00:07:42.360 iops : min= 5794, max= 6150, avg=5972.00, stdev=251.73, samples=2 00:07:42.360 lat (msec) : 2=0.07%, 4=1.99%, 10=57.73%, 20=33.05%, 50=7.17% 00:07:42.360 cpu : usr=2.58%, sys=4.97%, ctx=419, majf=0, minf=1 00:07:42.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:07:42.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:42.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:42.360 issued rwts: total=5632,6094,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:42.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:42.360 job1: (groupid=0, jobs=1): err= 0: pid=3560120: Tue Nov 26 19:14:15 2024 00:07:42.360 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1005msec) 00:07:42.360 slat (nsec): min=954, max=19377k, avg=117943.63, stdev=911735.74 00:07:42.360 clat (usec): min=2541, max=56181, avg=14154.49, stdev=9305.57 00:07:42.360 lat (usec): min=3672, max=56191, avg=14272.43, stdev=9387.84 
00:07:42.360 clat percentiles (usec): 00:07:42.360 | 1.00th=[ 5473], 5.00th=[ 7439], 10.00th=[ 7570], 20.00th=[ 7832], 00:07:42.360 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[13173], 00:07:42.360 | 70.00th=[15401], 80.00th=[20055], 90.00th=[24773], 95.00th=[34866], 00:07:42.360 | 99.00th=[54264], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361], 00:07:42.360 | 99.99th=[56361] 00:07:42.360 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:07:42.360 slat (nsec): min=1699, max=11375k, avg=122149.53, stdev=675376.72 00:07:42.360 clat (usec): min=859, max=63316, avg=16986.62, stdev=13467.39 00:07:42.360 lat (usec): min=864, max=63323, avg=17108.77, stdev=13547.25 00:07:42.360 clat percentiles (usec): 00:07:42.360 | 1.00th=[ 3884], 5.00th=[ 6718], 10.00th=[ 7177], 20.00th=[ 7570], 00:07:42.360 | 30.00th=[ 9765], 40.00th=[11731], 50.00th=[13042], 60.00th=[13566], 00:07:42.360 | 70.00th=[14091], 80.00th=[19530], 90.00th=[39060], 95.00th=[54264], 00:07:42.360 | 99.00th=[60556], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:07:42.360 | 99.99th=[63177] 00:07:42.360 bw ( KiB/s): min=12688, max=20120, per=17.12%, avg=16404.00, stdev=5255.22, samples=2 00:07:42.360 iops : min= 3172, max= 5030, avg=4101.00, stdev=1313.80, samples=2 00:07:42.360 lat (usec) : 1000=0.10% 00:07:42.360 lat (msec) : 4=0.59%, 10=41.22%, 20=38.07%, 50=16.17%, 100=3.85% 00:07:42.360 cpu : usr=3.19%, sys=3.19%, ctx=407, majf=0, minf=2 00:07:42.360 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:07:42.360 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:42.360 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:42.360 issued rwts: total=4084,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:42.360 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:42.360 job2: (groupid=0, jobs=1): err= 0: pid=3560140: Tue Nov 26 19:14:15 2024 00:07:42.360 read: IOPS=7626, 
BW=29.8MiB/s (31.2MB/s)(29.9MiB/1003msec) 00:07:42.360 slat (nsec): min=975, max=4371.8k, avg=65520.24, stdev=413411.47 00:07:42.360 clat (usec): min=1770, max=16548, avg=8098.42, stdev=1250.79 00:07:42.360 lat (usec): min=2417, max=16550, avg=8163.94, stdev=1300.35 00:07:42.360 clat percentiles (usec): 00:07:42.361 | 1.00th=[ 4490], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7111], 00:07:42.361 | 30.00th=[ 7308], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8455], 00:07:42.361 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:07:42.361 | 99.00th=[11469], 99.50th=[12125], 99.90th=[12780], 99.95th=[12911], 00:07:42.361 | 99.99th=[16581] 00:07:42.361 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:07:42.361 slat (nsec): min=1663, max=41807k, avg=60394.27, stdev=555471.81 00:07:42.361 clat (usec): min=2709, max=53728, avg=7842.58, stdev=2050.58 00:07:42.361 lat (usec): min=2711, max=53769, avg=7902.97, stdev=2125.58 00:07:42.361 clat percentiles (usec): 00:07:42.361 | 1.00th=[ 4424], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6652], 00:07:42.361 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8029], 00:07:42.361 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 9241], 95.00th=[10945], 00:07:42.361 | 99.00th=[15401], 99.50th=[15664], 99.90th=[42206], 99.95th=[42730], 00:07:42.361 | 99.99th=[53740] 00:07:42.361 bw ( KiB/s): min=28672, max=32768, per=32.06%, avg=30720.00, stdev=2896.31, samples=2 00:07:42.361 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:07:42.361 lat (msec) : 2=0.01%, 4=0.46%, 10=93.31%, 20=6.16%, 50=0.05% 00:07:42.361 lat (msec) : 100=0.01% 00:07:42.361 cpu : usr=2.79%, sys=6.59%, ctx=1043, majf=0, minf=1 00:07:42.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:42.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:42.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:07:42.361 issued rwts: total=7649,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:42.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:42.361 job3: (groupid=0, jobs=1): err= 0: pid=3560145: Tue Nov 26 19:14:15 2024 00:07:42.361 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:07:42.361 slat (nsec): min=980, max=18111k, avg=93890.22, stdev=810645.21 00:07:42.361 clat (usec): min=2696, max=65218, avg=11322.78, stdev=9950.70 00:07:42.361 lat (usec): min=2700, max=65243, avg=11416.68, stdev=10024.73 00:07:42.361 clat percentiles (usec): 00:07:42.361 | 1.00th=[ 3949], 5.00th=[ 6521], 10.00th=[ 7177], 20.00th=[ 7767], 00:07:42.361 | 30.00th=[ 7898], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:07:42.361 | 70.00th=[ 9765], 80.00th=[11600], 90.00th=[14484], 95.00th=[25560], 00:07:42.361 | 99.00th=[57934], 99.50th=[61080], 99.90th=[65274], 99.95th=[65274], 00:07:42.361 | 99.99th=[65274] 00:07:42.361 write: IOPS=6254, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1004msec); 0 zone resets 00:07:42.361 slat (nsec): min=1675, max=14876k, avg=63704.51, stdev=433383.18 00:07:42.361 clat (usec): min=1152, max=65145, avg=9189.58, stdev=8516.10 00:07:42.361 lat (usec): min=1159, max=65157, avg=9253.29, stdev=8560.35 00:07:42.361 clat percentiles (usec): 00:07:42.361 | 1.00th=[ 2278], 5.00th=[ 3752], 10.00th=[ 4752], 20.00th=[ 6915], 00:07:42.361 | 30.00th=[ 7504], 40.00th=[ 7767], 50.00th=[ 7832], 60.00th=[ 7963], 00:07:42.361 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[10290], 95.00th=[16581], 00:07:42.361 | 99.00th=[60556], 99.50th=[62653], 99.90th=[63177], 99.95th=[63177], 00:07:42.361 | 99.99th=[65274] 00:07:42.361 bw ( KiB/s): min=16616, max=32689, per=25.72%, avg=24652.50, stdev=11365.33, samples=2 00:07:42.361 iops : min= 4154, max= 8172, avg=6163.00, stdev=2841.16, samples=2 00:07:42.361 lat (msec) : 2=0.19%, 4=3.41%, 10=76.35%, 20=15.03%, 50=2.27% 00:07:42.361 lat (msec) : 100=2.74% 00:07:42.361 cpu : usr=2.69%, sys=4.79%, ctx=744, majf=0, 
minf=1 00:07:42.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:07:42.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:42.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:42.361 issued rwts: total=6144,6280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:42.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:42.361 00:07:42.361 Run status group 0 (all jobs): 00:07:42.361 READ: bw=91.1MiB/s (95.5MB/s), 15.9MiB/s-29.8MiB/s (16.6MB/s-31.2MB/s), io=91.8MiB (96.3MB), run=1003-1008msec 00:07:42.361 WRITE: bw=93.6MiB/s (98.1MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.4MB/s), io=94.3MiB (98.9MB), run=1003-1008msec 00:07:42.361 00:07:42.361 Disk stats (read/write): 00:07:42.361 nvme0n1: ios=4655/5120, merge=0/0, ticks=52991/48957, in_queue=101948, util=91.18% 00:07:42.361 nvme0n2: ios=3623/3655, merge=0/0, ticks=51796/50752, in_queue=102548, util=96.86% 00:07:42.361 nvme0n3: ios=6195/6479, merge=0/0, ticks=25759/26291, in_queue=52050, util=95.42% 00:07:42.361 nvme0n4: ios=5784/6144, merge=0/0, ticks=49965/45162, in_queue=95127, util=99.06% 00:07:42.361 19:14:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:07:42.361 [global] 00:07:42.361 thread=1 00:07:42.361 invalidate=1 00:07:42.361 rw=randwrite 00:07:42.361 time_based=1 00:07:42.361 runtime=1 00:07:42.361 ioengine=libaio 00:07:42.361 direct=1 00:07:42.361 bs=4096 00:07:42.361 iodepth=128 00:07:42.361 norandommap=0 00:07:42.361 numjobs=1 00:07:42.361 00:07:42.361 verify_dump=1 00:07:42.361 verify_backlog=512 00:07:42.361 verify_state_save=0 00:07:42.361 do_verify=1 00:07:42.361 verify=crc32c-intel 00:07:42.361 [job0] 00:07:42.361 filename=/dev/nvme0n1 00:07:42.361 [job1] 00:07:42.361 filename=/dev/nvme0n2 00:07:42.361 [job2] 00:07:42.361 filename=/dev/nvme0n3 00:07:42.361 [job3] 
00:07:42.361 filename=/dev/nvme0n4 00:07:42.361 Could not set queue depth (nvme0n1) 00:07:42.361 Could not set queue depth (nvme0n2) 00:07:42.361 Could not set queue depth (nvme0n3) 00:07:42.361 Could not set queue depth (nvme0n4) 00:07:42.621 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:42.621 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:42.621 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:42.621 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:07:42.621 fio-3.35 00:07:42.621 Starting 4 threads 00:07:43.658 00:07:43.658 job0: (groupid=0, jobs=1): err= 0: pid=3560605: Tue Nov 26 19:14:17 2024 00:07:43.658 read: IOPS=8645, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec) 00:07:43.658 slat (nsec): min=891, max=12139k, avg=56298.48, stdev=478477.18 00:07:43.658 clat (usec): min=2695, max=53025, avg=7868.58, stdev=2281.77 00:07:43.658 lat (usec): min=2699, max=53026, avg=7924.88, stdev=2313.85 00:07:43.658 clat percentiles (usec): 00:07:43.658 | 1.00th=[ 3228], 5.00th=[ 5276], 10.00th=[ 6128], 20.00th=[ 6652], 00:07:43.658 | 30.00th=[ 6783], 40.00th=[ 7046], 50.00th=[ 7373], 60.00th=[ 7701], 00:07:43.658 | 70.00th=[ 8356], 80.00th=[ 9110], 90.00th=[10552], 95.00th=[12125], 00:07:43.658 | 99.00th=[13173], 99.50th=[13173], 99.90th=[43254], 99.95th=[43254], 00:07:43.658 | 99.99th=[53216] 00:07:43.658 write: IOPS=8652, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1006msec); 0 zone resets 00:07:43.658 slat (nsec): min=1536, max=12953k, avg=43277.15, stdev=368206.67 00:07:43.658 clat (usec): min=418, max=19592, avg=6805.79, stdev=2597.47 00:07:43.658 lat (usec): min=424, max=24517, avg=6849.07, stdev=2620.59 00:07:43.658 clat percentiles (usec): 00:07:43.658 | 1.00th=[ 1614], 5.00th=[ 3163], 10.00th=[ 4015], 20.00th=[ 
4883], 00:07:43.658 | 30.00th=[ 5669], 40.00th=[ 6390], 50.00th=[ 6783], 60.00th=[ 6980], 00:07:43.658 | 70.00th=[ 7242], 80.00th=[ 7635], 90.00th=[ 9896], 95.00th=[12518], 00:07:43.658 | 99.00th=[15270], 99.50th=[15401], 99.90th=[15664], 99.95th=[16057], 00:07:43.658 | 99.99th=[19530] 00:07:43.658 bw ( KiB/s): min=32768, max=36864, per=33.60%, avg=34816.00, stdev=2896.31, samples=2 00:07:43.658 iops : min= 8192, max= 9216, avg=8704.00, stdev=724.08, samples=2 00:07:43.658 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.06% 00:07:43.658 lat (msec) : 2=0.68%, 4=5.14%, 10=82.91%, 20=11.12%, 50=0.06% 00:07:43.658 lat (msec) : 100=0.01% 00:07:43.658 cpu : usr=3.68%, sys=4.58%, ctx=674, majf=0, minf=1 00:07:43.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:07:43.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:43.659 issued rwts: total=8697,8704,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:43.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:43.659 job1: (groupid=0, jobs=1): err= 0: pid=3560619: Tue Nov 26 19:14:17 2024 00:07:43.659 read: IOPS=6318, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1003msec) 00:07:43.659 slat (nsec): min=883, max=14265k, avg=76421.29, stdev=523107.64 00:07:43.659 clat (usec): min=1103, max=28812, avg=9447.27, stdev=2958.08 00:07:43.659 lat (usec): min=3114, max=38132, avg=9523.69, stdev=3001.40 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 4752], 5.00th=[ 6259], 10.00th=[ 6521], 20.00th=[ 7242], 00:07:43.659 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9503], 00:07:43.659 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[12780], 95.00th=[15139], 00:07:43.659 | 99.00th=[20317], 99.50th=[21627], 99.90th=[24511], 99.95th=[26084], 00:07:43.659 | 99.99th=[28705] 00:07:43.659 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:07:43.659 slat 
(nsec): min=1497, max=18431k, avg=73185.78, stdev=487123.00 00:07:43.659 clat (usec): min=1305, max=56997, avg=10091.46, stdev=5042.87 00:07:43.659 lat (usec): min=1315, max=57000, avg=10164.65, stdev=5067.85 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 3851], 5.00th=[ 5014], 10.00th=[ 5538], 20.00th=[ 6652], 00:07:43.659 | 30.00th=[ 7177], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9765], 00:07:43.659 | 70.00th=[11207], 80.00th=[13698], 90.00th=[15664], 95.00th=[17433], 00:07:43.659 | 99.00th=[33424], 99.50th=[33424], 99.90th=[56886], 99.95th=[56886], 00:07:43.659 | 99.99th=[56886] 00:07:43.659 bw ( KiB/s): min=24576, max=28672, per=25.69%, avg=26624.00, stdev=2896.31, samples=2 00:07:43.659 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:07:43.659 lat (msec) : 2=0.03%, 4=0.81%, 10=67.91%, 20=29.34%, 50=1.85% 00:07:43.659 lat (msec) : 100=0.06% 00:07:43.659 cpu : usr=2.40%, sys=4.89%, ctx=649, majf=0, minf=1 00:07:43.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:07:43.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:43.659 issued rwts: total=6337,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:43.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:43.659 job2: (groupid=0, jobs=1): err= 0: pid=3560643: Tue Nov 26 19:14:17 2024 00:07:43.659 read: IOPS=6238, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1005msec) 00:07:43.659 slat (nsec): min=982, max=13324k, avg=84527.10, stdev=632764.85 00:07:43.659 clat (usec): min=967, max=56911, avg=10274.78, stdev=4543.47 00:07:43.659 lat (usec): min=3139, max=56921, avg=10359.31, stdev=4598.40 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 4293], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 7898], 00:07:43.659 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9765], 00:07:43.659 | 70.00th=[10683], 
80.00th=[12125], 90.00th=[13698], 95.00th=[15401], 00:07:43.659 | 99.00th=[32900], 99.50th=[41681], 99.90th=[51119], 99.95th=[56886], 00:07:43.659 | 99.99th=[56886] 00:07:43.659 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:07:43.659 slat (nsec): min=1648, max=8508.2k, avg=66689.55, stdev=406202.76 00:07:43.659 clat (usec): min=1421, max=56916, avg=9428.97, stdev=6393.47 00:07:43.659 lat (usec): min=1426, max=56925, avg=9495.66, stdev=6431.60 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 2999], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 7177], 00:07:43.659 | 30.00th=[ 7767], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:07:43.659 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[11994], 95.00th=[18482], 00:07:43.659 | 99.00th=[46400], 99.50th=[47973], 99.90th=[49546], 99.95th=[49546], 00:07:43.659 | 99.99th=[56886] 00:07:43.659 bw ( KiB/s): min=24560, max=28672, per=25.69%, avg=26616.00, stdev=2907.62, samples=2 00:07:43.659 iops : min= 6140, max= 7168, avg=6654.00, stdev=726.91, samples=2 00:07:43.659 lat (usec) : 1000=0.01% 00:07:43.659 lat (msec) : 2=0.07%, 4=1.92%, 10=73.08%, 20=21.96%, 50=2.79% 00:07:43.659 lat (msec) : 100=0.18% 00:07:43.659 cpu : usr=3.19%, sys=6.18%, ctx=711, majf=0, minf=1 00:07:43.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:07:43.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:43.659 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:43.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:43.659 job3: (groupid=0, jobs=1): err= 0: pid=3560652: Tue Nov 26 19:14:17 2024 00:07:43.659 read: IOPS=3632, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1008msec) 00:07:43.659 slat (nsec): min=941, max=27003k, avg=162418.42, stdev=1206693.17 00:07:43.659 clat (usec): min=3421, max=84148, avg=21601.24, stdev=19265.35 
00:07:43.659 lat (usec): min=3424, max=84153, avg=21763.66, stdev=19369.42 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 5866], 5.00th=[ 8160], 10.00th=[ 9110], 20.00th=[ 9765], 00:07:43.659 | 30.00th=[10290], 40.00th=[11863], 50.00th=[13566], 60.00th=[15926], 00:07:43.659 | 70.00th=[17433], 80.00th=[29230], 90.00th=[59507], 95.00th=[68682], 00:07:43.659 | 99.00th=[83362], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:07:43.659 | 99.99th=[84411] 00:07:43.659 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:07:43.659 slat (nsec): min=1606, max=23237k, avg=94324.37, stdev=689382.52 00:07:43.659 clat (usec): min=2270, max=38931, avg=11751.84, stdev=6354.77 00:07:43.659 lat (usec): min=2273, max=51921, avg=11846.16, stdev=6409.39 00:07:43.659 clat percentiles (usec): 00:07:43.659 | 1.00th=[ 2802], 5.00th=[ 5211], 10.00th=[ 7111], 20.00th=[ 8225], 00:07:43.659 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10814], 00:07:43.659 | 70.00th=[11863], 80.00th=[13566], 90.00th=[18220], 95.00th=[27395], 00:07:43.659 | 99.00th=[36963], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:07:43.659 | 99.99th=[39060] 00:07:43.659 bw ( KiB/s): min=12288, max=20088, per=15.62%, avg=16188.00, stdev=5515.43, samples=2 00:07:43.659 iops : min= 3072, max= 5022, avg=4047.00, stdev=1378.86, samples=2 00:07:43.659 lat (msec) : 4=1.79%, 10=37.07%, 20=43.79%, 50=11.36%, 100=5.99% 00:07:43.659 cpu : usr=1.89%, sys=3.97%, ctx=418, majf=0, minf=1 00:07:43.659 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:07:43.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:43.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:07:43.659 issued rwts: total=3662,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:43.659 latency : target=0, window=0, percentile=100.00%, depth=128 00:07:43.659 00:07:43.659 Run status group 0 (all jobs): 00:07:43.659 READ: 
bw=96.7MiB/s (101MB/s), 14.2MiB/s-33.8MiB/s (14.9MB/s-35.4MB/s), io=97.5MiB (102MB), run=1003-1008msec 00:07:43.659 WRITE: bw=101MiB/s (106MB/s), 15.9MiB/s-33.8MiB/s (16.6MB/s-35.4MB/s), io=102MiB (107MB), run=1003-1008msec 00:07:43.659 00:07:43.659 Disk stats (read/write): 00:07:43.659 nvme0n1: ios=7218/7567, merge=0/0, ticks=54150/49461, in_queue=103611, util=87.27% 00:07:43.659 nvme0n2: ios=5170/5624, merge=0/0, ticks=43294/48806, in_queue=92100, util=91.03% 00:07:43.659 nvme0n3: ios=5177/5519, merge=0/0, ticks=51362/51597, in_queue=102959, util=96.84% 00:07:43.659 nvme0n4: ios=2602/2887, merge=0/0, ticks=21787/13752, in_queue=35539, util=96.58% 00:07:43.659 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:07:43.659 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3560839 00:07:43.659 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:07:43.659 19:14:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:07:43.959 [global] 00:07:43.959 thread=1 00:07:43.959 invalidate=1 00:07:43.959 rw=read 00:07:43.959 time_based=1 00:07:43.959 runtime=10 00:07:43.959 ioengine=libaio 00:07:43.959 direct=1 00:07:43.959 bs=4096 00:07:43.959 iodepth=1 00:07:43.959 norandommap=1 00:07:43.959 numjobs=1 00:07:43.959 00:07:43.959 [job0] 00:07:43.959 filename=/dev/nvme0n1 00:07:43.959 [job1] 00:07:43.959 filename=/dev/nvme0n2 00:07:43.959 [job2] 00:07:43.959 filename=/dev/nvme0n3 00:07:43.959 [job3] 00:07:43.959 filename=/dev/nvme0n4 00:07:43.959 Could not set queue depth (nvme0n1) 00:07:43.959 Could not set queue depth (nvme0n2) 00:07:43.959 Could not set queue depth (nvme0n3) 00:07:43.959 Could not set queue depth (nvme0n4) 00:07:44.219 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:44.219 job1: 
(g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:44.219 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:44.219 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:44.219 fio-3.35 00:07:44.219 Starting 4 threads 00:07:46.763 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:07:47.024 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:07:47.024 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2154496, buflen=4096 00:07:47.024 fio: pid=3561143, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:47.024 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=12881920, buflen=4096 00:07:47.024 fio: pid=3561136, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:47.024 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.024 19:14:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:07:47.283 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.283 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:07:47.283 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=14270464, buflen=4096 00:07:47.283 fio: pid=3561108, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:47.543 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.543 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:07:47.543 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12898304, buflen=4096 00:07:47.543 fio: pid=3561118, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:07:47.543 00:07:47.543 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3561108: Tue Nov 26 19:14:21 2024 00:07:47.543 read: IOPS=1155, BW=4621KiB/s (4732kB/s)(13.6MiB/3016msec) 00:07:47.543 slat (usec): min=3, max=15453, avg=26.63, stdev=382.96 00:07:47.543 clat (usec): min=247, max=41209, avg=828.99, stdev=1372.85 00:07:47.543 lat (usec): min=255, max=41220, avg=855.62, stdev=1427.28 00:07:47.543 clat percentiles (usec): 00:07:47.543 | 1.00th=[ 441], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 652], 00:07:47.543 | 30.00th=[ 701], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 816], 00:07:47.543 | 70.00th=[ 857], 80.00th=[ 930], 90.00th=[ 996], 95.00th=[ 1029], 00:07:47.543 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[41157], 99.95th=[41157], 00:07:47.543 | 99.99th=[41157] 00:07:47.543 bw ( KiB/s): min= 4136, max= 5432, per=37.75%, avg=4886.40, stdev=498.78, samples=5 00:07:47.543 iops : min= 1034, max= 1358, avg=1221.60, stdev=124.69, samples=5 00:07:47.543 lat (usec) : 250=0.03%, 500=2.64%, 750=37.70%, 1000=51.08% 00:07:47.543 lat (msec) : 2=8.41%, 50=0.11% 00:07:47.543 cpu : usr=0.70%, sys=1.96%, ctx=3492, majf=0, minf=1 00:07:47.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:07:47.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 issued rwts: total=3485,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.543 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3561118: Tue Nov 26 19:14:21 2024 00:07:47.543 read: IOPS=989, BW=3956KiB/s (4051kB/s)(12.3MiB/3184msec) 00:07:47.543 slat (usec): min=2, max=22619, avg=41.68, stdev=675.18 00:07:47.543 clat (usec): min=464, max=1330, avg=956.00, stdev=82.67 00:07:47.543 lat (usec): min=476, max=23596, avg=997.69, stdev=679.62 00:07:47.543 clat percentiles (usec): 00:07:47.543 | 1.00th=[ 685], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:07:47.543 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 979], 00:07:47.543 | 70.00th=[ 996], 80.00th=[ 1012], 90.00th=[ 1045], 95.00th=[ 1074], 00:07:47.543 | 99.00th=[ 1139], 99.50th=[ 1172], 99.90th=[ 1221], 99.95th=[ 1254], 00:07:47.543 | 99.99th=[ 1336] 00:07:47.543 bw ( KiB/s): min= 3647, max= 4136, per=30.92%, avg=4003.83, stdev=179.76, samples=6 00:07:47.543 iops : min= 911, max= 1034, avg=1000.83, stdev=45.24, samples=6 00:07:47.543 lat (usec) : 500=0.03%, 750=2.10%, 1000=70.86% 00:07:47.543 lat (msec) : 2=26.98% 00:07:47.543 cpu : usr=1.23%, sys=3.11%, ctx=3154, majf=0, minf=2 00:07:47.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 issued rwts: total=3150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.543 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3561136: Tue Nov 26 19:14:21 2024 00:07:47.543 read: IOPS=1107, BW=4428KiB/s 
(4534kB/s)(12.3MiB/2841msec) 00:07:47.543 slat (nsec): min=2990, max=56916, avg=14731.77, stdev=7054.24 00:07:47.543 clat (usec): min=219, max=42110, avg=878.44, stdev=1965.64 00:07:47.543 lat (usec): min=230, max=42136, avg=893.17, stdev=1966.15 00:07:47.543 clat percentiles (usec): 00:07:47.543 | 1.00th=[ 433], 5.00th=[ 529], 10.00th=[ 578], 20.00th=[ 644], 00:07:47.543 | 30.00th=[ 693], 40.00th=[ 742], 50.00th=[ 783], 60.00th=[ 816], 00:07:47.543 | 70.00th=[ 857], 80.00th=[ 922], 90.00th=[ 996], 95.00th=[ 1037], 00:07:47.543 | 99.00th=[ 1090], 99.50th=[ 1172], 99.90th=[42206], 99.95th=[42206], 00:07:47.543 | 99.99th=[42206] 00:07:47.543 bw ( KiB/s): min= 2096, max= 5584, per=33.50%, avg=4336.00, stdev=1387.89, samples=5 00:07:47.543 iops : min= 524, max= 1396, avg=1084.00, stdev=346.97, samples=5 00:07:47.543 lat (usec) : 250=0.03%, 500=3.08%, 750=37.95%, 1000=49.75% 00:07:47.543 lat (msec) : 2=8.87%, 10=0.03%, 20=0.03%, 50=0.22% 00:07:47.543 cpu : usr=0.77%, sys=1.62%, ctx=3146, majf=0, minf=2 00:07:47.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 issued rwts: total=3146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.543 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3561143: Tue Nov 26 19:14:21 2024 00:07:47.543 read: IOPS=195, BW=780KiB/s (799kB/s)(2104KiB/2696msec) 00:07:47.543 slat (nsec): min=3205, max=57899, avg=19136.93, stdev=6078.79 00:07:47.543 clat (usec): min=549, max=42977, avg=5060.91, stdev=12011.87 00:07:47.543 lat (usec): min=552, max=43002, avg=5080.04, stdev=12012.44 00:07:47.543 clat percentiles (usec): 00:07:47.543 | 1.00th=[ 627], 5.00th=[ 816], 10.00th=[ 971], 20.00th=[ 1074], 00:07:47.543 | 
30.00th=[ 1106], 40.00th=[ 1139], 50.00th=[ 1172], 60.00th=[ 1188], 00:07:47.543 | 70.00th=[ 1221], 80.00th=[ 1254], 90.00th=[ 1401], 95.00th=[41681], 00:07:47.543 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:07:47.543 | 99.99th=[42730] 00:07:47.543 bw ( KiB/s): min= 96, max= 1608, per=5.96%, avg=771.20, stdev=764.61, samples=5 00:07:47.543 iops : min= 24, max= 402, avg=192.80, stdev=191.15, samples=5 00:07:47.543 lat (usec) : 750=2.66%, 1000=9.11% 00:07:47.543 lat (msec) : 2=78.37%, 50=9.68% 00:07:47.543 cpu : usr=0.19%, sys=0.37%, ctx=527, majf=0, minf=2 00:07:47.543 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:47.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:47.543 issued rwts: total=527,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:47.543 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:47.543 00:07:47.543 Run status group 0 (all jobs): 00:07:47.543 READ: bw=12.6MiB/s (13.3MB/s), 780KiB/s-4621KiB/s (799kB/s-4732kB/s), io=40.2MiB (42.2MB), run=2696-3184msec 00:07:47.543 00:07:47.543 Disk stats (read/write): 00:07:47.543 nvme0n1: ios=3488/0, merge=0/0, ticks=3226/0, in_queue=3226, util=98.60% 00:07:47.543 nvme0n2: ios=3096/0, merge=0/0, ticks=2783/0, in_queue=2783, util=93.84% 00:07:47.543 nvme0n3: ios=2844/0, merge=0/0, ticks=2537/0, in_queue=2537, util=96.16% 00:07:47.543 nvme0n4: ios=504/0, merge=0/0, ticks=2537/0, in_queue=2537, util=96.45% 00:07:47.543 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.544 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:07:47.803 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.803 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:07:47.803 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:47.803 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:07:48.062 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:07:48.062 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:07:48.320 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:07:48.320 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3560839 00:07:48.320 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:07:48.320 19:14:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:48.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:48.320 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:48.320 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:07:48.320 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:48.321 19:14:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:07:48.321 nvmf hotplug test: fio failed as expected 00:07:48.321 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:48.579 19:14:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:48.579 rmmod nvme_tcp 00:07:48.579 rmmod nvme_fabrics 00:07:48.579 rmmod nvme_keyring 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3557324 ']' 00:07:48.579 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3557324 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3557324 ']' 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3557324 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3557324 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3557324' 00:07:48.580 killing process with pid 3557324 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3557324 00:07:48.580 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3557324 
00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.839 19:14:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:50.757 00:07:50.757 real 0m25.699s 00:07:50.757 user 2m14.346s 00:07:50.757 sys 0m7.129s 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:07:50.757 ************************************ 00:07:50.757 END TEST nvmf_fio_target 00:07:50.757 ************************************ 00:07:50.757 19:14:24 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.757 ************************************ 00:07:50.757 START TEST nvmf_bdevio 00:07:50.757 ************************************ 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:07:50.757 * Looking for test storage... 00:07:50.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.757 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.016 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@337 -- # IFS=.-: 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # 
ver2[v]=2 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.017 --rc genhtml_branch_coverage=1 00:07:51.017 --rc genhtml_function_coverage=1 00:07:51.017 --rc genhtml_legend=1 00:07:51.017 --rc geninfo_all_blocks=1 00:07:51.017 --rc geninfo_unexecuted_blocks=1 00:07:51.017 00:07:51.017 ' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.017 --rc genhtml_branch_coverage=1 00:07:51.017 --rc genhtml_function_coverage=1 00:07:51.017 --rc genhtml_legend=1 00:07:51.017 --rc geninfo_all_blocks=1 00:07:51.017 --rc geninfo_unexecuted_blocks=1 00:07:51.017 00:07:51.017 ' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.017 --rc genhtml_branch_coverage=1 00:07:51.017 --rc genhtml_function_coverage=1 00:07:51.017 --rc genhtml_legend=1 00:07:51.017 --rc geninfo_all_blocks=1 00:07:51.017 --rc geninfo_unexecuted_blocks=1 00:07:51.017 00:07:51.017 ' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.017 --rc 
genhtml_branch_coverage=1 00:07:51.017 --rc genhtml_function_coverage=1 00:07:51.017 --rc genhtml_legend=1 00:07:51.017 --rc geninfo_all_blocks=1 00:07:51.017 --rc geninfo_unexecuted_blocks=1 00:07:51.017 00:07:51.017 ' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:51.017 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:51.018 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:07:51.018 19:14:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:56.295 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.295 19:14:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.295 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:56.296 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:56.296 Found net devices under 0000:31:00.0: cvl_0_0 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:56.296 Found net devices under 0000:31:00.1: cvl_0_1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:56.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:07:56.296 00:07:56.296 --- 10.0.0.2 ping statistics --- 00:07:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.296 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:07:56.296 00:07:56.296 --- 10.0.0.1 ping statistics --- 00:07:56.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.296 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3566552 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3566552 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@835 -- # '[' -z 3566552 ']' 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:07:56.296 19:14:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:56.296 [2024-11-26 19:14:30.037041] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:07:56.296 [2024-11-26 19:14:30.037094] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.296 [2024-11-26 19:14:30.110752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.297 [2024-11-26 19:14:30.142184] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.297 [2024-11-26 19:14:30.142215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:56.297 [2024-11-26 19:14:30.142221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.297 [2024-11-26 19:14:30.142225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.297 [2024-11-26 19:14:30.142229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.297 [2024-11-26 19:14:30.143792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:56.297 [2024-11-26 19:14:30.143944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:56.297 [2024-11-26 19:14:30.144074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.297 [2024-11-26 19:14:30.144076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 [2024-11-26 19:14:30.849867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 Malloc0 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.235 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:57.236 [2024-11-26 
19:14:30.906399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:57.236 { 00:07:57.236 "params": { 00:07:57.236 "name": "Nvme$subsystem", 00:07:57.236 "trtype": "$TEST_TRANSPORT", 00:07:57.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.236 "adrfam": "ipv4", 00:07:57.236 "trsvcid": "$NVMF_PORT", 00:07:57.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.236 "hdgst": ${hdgst:-false}, 00:07:57.236 "ddgst": ${ddgst:-false} 00:07:57.236 }, 00:07:57.236 "method": "bdev_nvme_attach_controller" 00:07:57.236 } 00:07:57.236 EOF 00:07:57.236 )") 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:07:57.236 19:14:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:57.236 "params": { 00:07:57.236 "name": "Nvme1", 00:07:57.236 "trtype": "tcp", 00:07:57.236 "traddr": "10.0.0.2", 00:07:57.236 "adrfam": "ipv4", 00:07:57.236 "trsvcid": "4420", 00:07:57.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:57.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:57.236 "hdgst": false, 00:07:57.236 "ddgst": false 00:07:57.236 }, 00:07:57.236 "method": "bdev_nvme_attach_controller" 00:07:57.236 }' 00:07:57.236 [2024-11-26 19:14:30.944033] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:07:57.236 [2024-11-26 19:14:30.944084] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3566798 ] 00:07:57.236 [2024-11-26 19:14:31.022664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.236 [2024-11-26 19:14:31.061647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.236 [2024-11-26 19:14:31.061807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.236 [2024-11-26 19:14:31.061807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.495 I/O targets: 00:07:57.495 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:07:57.495 00:07:57.495 00:07:57.495 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.495 http://cunit.sourceforge.net/ 00:07:57.495 00:07:57.495 00:07:57.495 Suite: bdevio tests on: Nvme1n1 00:07:57.495 Test: blockdev write read block ...passed 00:07:57.495 Test: blockdev write zeroes read block ...passed 00:07:57.495 Test: blockdev write zeroes read no split ...passed 00:07:57.756 Test: blockdev write zeroes read split 
...passed 00:07:57.756 Test: blockdev write zeroes read split partial ...passed 00:07:57.756 Test: blockdev reset ...[2024-11-26 19:14:31.410840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:07:57.756 [2024-11-26 19:14:31.410912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cd4b0 (9): Bad file descriptor 00:07:57.756 [2024-11-26 19:14:31.558455] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:07:57.756 passed 00:07:57.756 Test: blockdev write read 8 blocks ...passed 00:07:58.015 Test: blockdev write read size > 128k ...passed 00:07:58.015 Test: blockdev write read invalid size ...passed 00:07:58.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:58.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:58.015 Test: blockdev write read max offset ...passed 00:07:58.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:58.015 Test: blockdev writev readv 8 blocks ...passed 00:07:58.015 Test: blockdev writev readv 30 x 1block ...passed 00:07:58.015 Test: blockdev writev readv block ...passed 00:07:58.015 Test: blockdev writev readv size > 128k ...passed 00:07:58.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:58.015 Test: blockdev comparev and writev ...[2024-11-26 19:14:31.819486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 19:14:31.819513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:07:58.015 [2024-11-26 19:14:31.819524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 
19:14:31.819530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:07:58.015 [2024-11-26 19:14:31.819825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 19:14:31.819834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:07:58.015 [2024-11-26 19:14:31.819844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 19:14:31.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:07:58.015 [2024-11-26 19:14:31.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 19:14:31.820158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:07:58.015 [2024-11-26 19:14:31.820168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.015 [2024-11-26 19:14:31.820173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:07:58.016 [2024-11-26 19:14:31.820481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.016 [2024-11-26 19:14:31.820490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:07:58.016 [2024-11-26 19:14:31.820500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:07:58.016 [2024-11-26 19:14:31.820505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:07:58.016 passed 00:07:58.275 Test: blockdev nvme passthru rw ...passed 00:07:58.275 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:14:31.902622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:07:58.275 [2024-11-26 19:14:31.902634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:07:58.275 [2024-11-26 19:14:31.902826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:07:58.275 [2024-11-26 19:14:31.902835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:07:58.275 [2024-11-26 19:14:31.903110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:07:58.275 [2024-11-26 19:14:31.903119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:07:58.275 [2024-11-26 19:14:31.903348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:07:58.275 [2024-11-26 19:14:31.903356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:07:58.275 passed 00:07:58.275 Test: blockdev nvme admin passthru ...passed 00:07:58.275 Test: blockdev copy ...passed 00:07:58.275 00:07:58.275 Run Summary: Type Total Ran Passed Failed Inactive 00:07:58.275 suites 1 1 n/a 0 0 00:07:58.275 tests 23 23 23 0 0 00:07:58.275 asserts 152 152 152 0 n/a 00:07:58.275 00:07:58.275 Elapsed time = 1.429 seconds 
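For readability: the bdevio run above was driven by a generated JSON config piped in via `--json /dev/fd/62`. The trace at `nvmf/common.sh@586` prints the substituted result inline; reconstructed as a standalone fragment (values exactly as substituted in this run, structure assumed to match what `gen_nvmf_target_json` emits here) it is:

```json
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
```

This is the initiator-side half of the setup: the target side was configured earlier via `rpc_cmd` (`nvmf_create_transport -t tcp -o -u 8192`, `bdev_malloc_create 64 512 -b Malloc0`, `nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener ... -a 10.0.0.2 -s 4420`), so bdevio attaches to the listener created inside the `cvl_0_0_ns_spdk` namespace.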
00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:58.275 rmmod nvme_tcp 00:07:58.275 rmmod nvme_fabrics 00:07:58.275 rmmod nvme_keyring 00:07:58.275 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3566552 ']' 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3566552 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 3566552 ']' 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3566552 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.276 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3566552 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3566552' 00:07:58.535 killing process with pid 3566552 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3566552 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3566552 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.535 19:14:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.071 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:01.071 00:08:01.071 real 0m9.791s 00:08:01.071 user 0m12.310s 00:08:01.071 sys 0m4.440s 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:01.072 ************************************ 00:08:01.072 END TEST nvmf_bdevio 00:08:01.072 ************************************ 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:01.072 00:08:01.072 real 4m27.936s 00:08:01.072 user 10m55.042s 00:08:01.072 sys 1m27.977s 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:01.072 ************************************ 00:08:01.072 END TEST nvmf_target_core 00:08:01.072 ************************************ 00:08:01.072 19:14:34 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:01.072 19:14:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.072 19:14:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.072 19:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:08:01.072 ************************************ 00:08:01.072 START TEST nvmf_target_extra 00:08:01.072 ************************************ 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:08:01.072 * Looking for test storage... 00:08:01.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 
00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:01.072 ************************************ 00:08:01.072 START TEST nvmf_example 00:08:01.072 ************************************ 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.072 * Looking for test storage... 00:08:01.072 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.072 
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.072 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:01.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.072 --rc genhtml_branch_coverage=1 00:08:01.072 --rc genhtml_function_coverage=1 00:08:01.072 --rc genhtml_legend=1 00:08:01.072 --rc geninfo_all_blocks=1 00:08:01.072 --rc geninfo_unexecuted_blocks=1 00:08:01.072 00:08:01.072 ' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:01.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.073 --rc 
genhtml_branch_coverage=1 00:08:01.073 --rc genhtml_function_coverage=1 00:08:01.073 --rc genhtml_legend=1 00:08:01.073 --rc geninfo_all_blocks=1 00:08:01.073 --rc geninfo_unexecuted_blocks=1 00:08:01.073 00:08:01.073 ' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:01.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:01.073 19:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.073 
19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:01.073 19:14:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:08:07.640 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:07.641 19:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:07.641 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:07.641 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:07.641 Found net devices under 0000:31:00.0: cvl_0_0 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:07.641 19:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:07.641 Found net devices under 0000:31:00.1: cvl_0_1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:07.641 
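The "Found net devices under 0000:31:00.0: cvl_0_0" lines above come from a sysfs glob followed by the bash prefix-strip `"${pci_net_devs[@]##*/}"`. A minimal standalone sketch of that expansion, with the PCI address and interface name copied from this log:

```shell
# The harness globs /sys/bus/pci/devices/$pci/net/* and then strips the
# directory prefix from every element: ##*/ deletes the longest leading
# match of "*/", leaving only the bare interface name.
pci_net_devs=("/sys/bus/pci/devices/0000:31:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"    # cvl_0_0
```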
19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:07.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:08:07.641 00:08:07.641 --- 10.0.0.2 ping statistics --- 00:08:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.641 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:07.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:07.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:08:07.641 00:08:07.641 --- 10.0.0.1 ping statistics --- 00:08:07.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.641 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.641 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.642 19:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3571582 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3571582 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3571582 ']' 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
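Condensed, the namespace plumbing traced between nvmf/common.sh@250 and @291 above amounts to the sequence below. Interface names and addresses are taken from this log; the commands need root and the physical NICs, so this is a summary of the trace, not something to run elsewhere:

```shell
ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP stays in the host ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can drop it
# later via iptables-save | grep -v SPDK_NVMF | iptables-restore.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                 # host -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # and the reverse path
```

Splitting target and initiator across a namespace boundary is what lets one machine exercise real NIC-to-NIC TCP traffic, as the two successful pings confirm.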
00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.642 19:14:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.901 19:14:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:07.901 19:14:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:20.117 Initializing NVMe Controllers 00:08:20.117 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:20.117 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:20.117 Initialization complete. Launching workers. 00:08:20.117 ======================================================== 00:08:20.117 Latency(us) 00:08:20.117 Device Information : IOPS MiB/s Average min max 00:08:20.117 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19497.19 76.16 3282.30 625.30 15457.75 00:08:20.117 ======================================================== 00:08:20.117 Total : 19497.19 76.16 3282.30 625.30 15457.75 00:08:20.117 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:20.117 rmmod nvme_tcp 00:08:20.117 rmmod nvme_fabrics 00:08:20.117 rmmod nvme_keyring 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3571582 ']' 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3571582 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3571582 ']' 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3571582 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:08:20.117 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3571582 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3571582' 00:08:20.118 killing process with pid 3571582 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3571582 00:08:20.118 19:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3571582 00:08:20.118 nvmf threads initialize successfully 00:08:20.118 bdev subsystem init successfully 00:08:20.118 created a nvmf target service 00:08:20.118 create targets's poll groups done 00:08:20.118 all subsystems of target started 00:08:20.118 nvmf target is running 00:08:20.118 all subsystems of target stopped 00:08:20.118 
destroy targets's poll groups done 00:08:20.118 destroyed the nvmf target service 00:08:20.118 bdev subsystem finish successfully 00:08:20.118 nvmf threads destroy successfully 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.118 19:14:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
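For reference, the target configuration driven through rpc_cmd earlier in this test, plus the load it was measured under, condenses to the calls below. NQN, serial, sizes, and perf flags are all as recorded in the trace; writing them as plain scripts/rpc.py invocations is an assumption about how rpc_cmd is dispatched, not a verbatim replay:

```shell
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512            # 64 MiB malloc bdev, 512 B blocks -> Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Driven from the initiator side: queue depth 64, 4 KiB I/Os, mixed random
# read/write for 10 seconds against the listener just created.
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
```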
00:08:20.376 00:08:20.376 real 0m19.525s 00:08:20.376 user 0m45.704s 00:08:20.376 sys 0m5.613s 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:20.376 ************************************ 00:08:20.376 END TEST nvmf_example 00:08:20.376 ************************************ 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:20.376 ************************************ 00:08:20.376 START TEST nvmf_filesystem 00:08:20.376 ************************************ 00:08:20.376 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:20.376 * Looking for test storage... 
00:08:20.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:08:20.642 
19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.642 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:20.642 --rc genhtml_branch_coverage=1 00:08:20.642 --rc genhtml_function_coverage=1 00:08:20.642 --rc genhtml_legend=1 00:08:20.642 --rc geninfo_all_blocks=1 00:08:20.642 --rc geninfo_unexecuted_blocks=1 00:08:20.642 00:08:20.642 ' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.642 --rc genhtml_branch_coverage=1 00:08:20.642 --rc genhtml_function_coverage=1 00:08:20.642 --rc genhtml_legend=1 00:08:20.642 --rc geninfo_all_blocks=1 00:08:20.642 --rc geninfo_unexecuted_blocks=1 00:08:20.642 00:08:20.642 ' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.642 --rc genhtml_branch_coverage=1 00:08:20.642 --rc genhtml_function_coverage=1 00:08:20.642 --rc genhtml_legend=1 00:08:20.642 --rc geninfo_all_blocks=1 00:08:20.642 --rc geninfo_unexecuted_blocks=1 00:08:20.642 00:08:20.642 ' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.642 --rc genhtml_branch_coverage=1 00:08:20.642 --rc genhtml_function_coverage=1 00:08:20.642 --rc genhtml_legend=1 00:08:20.642 --rc geninfo_all_blocks=1 00:08:20.642 --rc geninfo_unexecuted_blocks=1 00:08:20.642 00:08:20.642 ' 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:20.642 19:14:54 
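The lt/cmp_versions trace above (deciding whether the installed lcov predates 2.x) splits both version strings on `.`/`-` and compares them field by field. A minimal standalone sketch of that comparison, using the hypothetical helper name version_lt rather than the harness's own functions:

```shell
# version_lt A B: succeed (exit 0) when dotted version A sorts before B.
# Fields are compared numerically, left to right; missing fields count as 0.
version_lt() {
    local IFS=.-
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Note the per-field numeric comparison: a plain string compare would wrongly rank 1.9 above 1.15.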
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:08:20.642 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:08:20.643 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:08:20.643 #define SPDK_CONFIG_H
00:08:20.643 #define SPDK_CONFIG_AIO_FSDEV 1
00:08:20.643 #define SPDK_CONFIG_APPS 1
00:08:20.643 #define SPDK_CONFIG_ARCH native
00:08:20.643 #undef SPDK_CONFIG_ASAN
00:08:20.643 #undef SPDK_CONFIG_AVAHI
00:08:20.643 #undef SPDK_CONFIG_CET
00:08:20.643 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:08:20.643 #define SPDK_CONFIG_COVERAGE 1
00:08:20.643 #define SPDK_CONFIG_CROSS_PREFIX
00:08:20.643 #undef SPDK_CONFIG_CRYPTO
00:08:20.643 #undef SPDK_CONFIG_CRYPTO_MLX5
00:08:20.643 #undef SPDK_CONFIG_CUSTOMOCF
00:08:20.643 #undef SPDK_CONFIG_DAOS
00:08:20.643 #define SPDK_CONFIG_DAOS_DIR
00:08:20.643 #define SPDK_CONFIG_DEBUG 1
00:08:20.643 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:08:20.643 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:08:20.643 #define SPDK_CONFIG_DPDK_INC_DIR
00:08:20.643 #define SPDK_CONFIG_DPDK_LIB_DIR
00:08:20.643 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:08:20.643 #undef SPDK_CONFIG_DPDK_UADK
00:08:20.643 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:08:20.643 #define SPDK_CONFIG_EXAMPLES 1
00:08:20.643 #undef SPDK_CONFIG_FC
00:08:20.643 #define SPDK_CONFIG_FC_PATH
00:08:20.643 #define SPDK_CONFIG_FIO_PLUGIN 1
00:08:20.643 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:08:20.643 #define SPDK_CONFIG_FSDEV 1
00:08:20.643 #undef SPDK_CONFIG_FUSE
00:08:20.643 #undef SPDK_CONFIG_FUZZER
00:08:20.643 #define SPDK_CONFIG_FUZZER_LIB
00:08:20.643 #undef SPDK_CONFIG_GOLANG
00:08:20.643 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:08:20.643 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:08:20.643 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:08:20.643 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:08:20.643 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:08:20.643 #undef SPDK_CONFIG_HAVE_LIBBSD
00:08:20.643 #undef SPDK_CONFIG_HAVE_LZ4
00:08:20.643 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:08:20.643 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:08:20.643 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:08:20.643 #define SPDK_CONFIG_IDXD 1
00:08:20.643 #define SPDK_CONFIG_IDXD_KERNEL 1
00:08:20.643 #undef SPDK_CONFIG_IPSEC_MB
00:08:20.643 #define SPDK_CONFIG_IPSEC_MB_DIR
00:08:20.643 #define SPDK_CONFIG_ISAL 1
00:08:20.643 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:08:20.643 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:08:20.643 #define SPDK_CONFIG_LIBDIR
00:08:20.643 #undef SPDK_CONFIG_LTO
00:08:20.643 #define SPDK_CONFIG_MAX_LCORES 128
00:08:20.643 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:08:20.643 #define SPDK_CONFIG_NVME_CUSE 1
00:08:20.643 #undef SPDK_CONFIG_OCF
00:08:20.643 #define SPDK_CONFIG_OCF_PATH
00:08:20.643 #define SPDK_CONFIG_OPENSSL_PATH
00:08:20.643 #undef SPDK_CONFIG_PGO_CAPTURE
00:08:20.643 #define SPDK_CONFIG_PGO_DIR
00:08:20.643 #undef SPDK_CONFIG_PGO_USE
00:08:20.643 #define SPDK_CONFIG_PREFIX /usr/local
00:08:20.643 #undef SPDK_CONFIG_RAID5F
00:08:20.643 #undef SPDK_CONFIG_RBD
00:08:20.643 #define SPDK_CONFIG_RDMA 1
00:08:20.643 #define SPDK_CONFIG_RDMA_PROV verbs
00:08:20.643 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:08:20.644 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:08:20.644 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:08:20.644 #define SPDK_CONFIG_SHARED 1
00:08:20.644 #undef SPDK_CONFIG_SMA
00:08:20.644 #define SPDK_CONFIG_TESTS 1
00:08:20.644 #undef SPDK_CONFIG_TSAN
00:08:20.644 #define SPDK_CONFIG_UBLK 1
00:08:20.644 #define SPDK_CONFIG_UBSAN 1
00:08:20.644 #undef SPDK_CONFIG_UNIT_TESTS
00:08:20.644 #undef SPDK_CONFIG_URING
00:08:20.644 #define SPDK_CONFIG_URING_PATH
00:08:20.644 #undef SPDK_CONFIG_URING_ZNS
00:08:20.644 #undef SPDK_CONFIG_USDT
00:08:20.644 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:08:20.644 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:08:20.644 #define SPDK_CONFIG_VFIO_USER 1
00:08:20.644 #define SPDK_CONFIG_VFIO_USER_DIR
00:08:20.644 #define SPDK_CONFIG_VHOST 1
00:08:20.644 #define SPDK_CONFIG_VIRTIO 1
00:08:20.644 #undef SPDK_CONFIG_VTUNE
00:08:20.644 #define SPDK_CONFIG_VTUNE_DIR
00:08:20.644 #define SPDK_CONFIG_WERROR 1
00:08:20.644 #define SPDK_CONFIG_WPDK_DIR
00:08:20.644 #undef SPDK_CONFIG_XNVME
00:08:20.644 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:08:20.644 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.645 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3574692 ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3574692 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Wqez8r 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Wqez8r/tests/target /tmp/spdk.Wqez8r 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:08:20.646 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=123400339456 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356533760 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5956194304 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847713792 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23592960 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=349184 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:08:20.647 19:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=154624 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677933056 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678268928 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=335872 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:08:20.647 * Looking for test storage... 
00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=123400339456 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8170786816 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.647 19:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:20.647 19:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:20.647 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:20.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.648 --rc genhtml_branch_coverage=1 00:08:20.648 --rc genhtml_function_coverage=1 00:08:20.648 --rc genhtml_legend=1 00:08:20.648 --rc geninfo_all_blocks=1 00:08:20.648 --rc geninfo_unexecuted_blocks=1 00:08:20.648 00:08:20.648 ' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:20.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.648 --rc genhtml_branch_coverage=1 00:08:20.648 --rc genhtml_function_coverage=1 00:08:20.648 --rc genhtml_legend=1 00:08:20.648 --rc geninfo_all_blocks=1 00:08:20.648 --rc geninfo_unexecuted_blocks=1 00:08:20.648 00:08:20.648 ' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:20.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.648 --rc genhtml_branch_coverage=1 00:08:20.648 --rc genhtml_function_coverage=1 00:08:20.648 --rc genhtml_legend=1 00:08:20.648 --rc geninfo_all_blocks=1 00:08:20.648 --rc geninfo_unexecuted_blocks=1 00:08:20.648 00:08:20.648 ' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:20.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:20.648 --rc genhtml_branch_coverage=1 00:08:20.648 --rc genhtml_function_coverage=1 00:08:20.648 --rc genhtml_legend=1 00:08:20.648 --rc geninfo_all_blocks=1 00:08:20.648 --rc geninfo_unexecuted_blocks=1 00:08:20.648 00:08:20.648 ' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.648 19:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:20.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:20.648 19:14:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.923 19:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:25.923 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:25.923 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.923 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.923 19:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:25.924 Found net devices under 0000:31:00.0: cvl_0_0 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:25.924 Found net devices under 0000:31:00.1: cvl_0_1 00:08:25.924 19:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:25.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:08:25.924 00:08:25.924 --- 10.0.0.2 ping statistics --- 00:08:25.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.924 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:08:25.924 00:08:25.924 --- 10.0.0.1 ping statistics --- 00:08:25.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.924 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:25.924 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.184 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:26.184 19:14:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.184 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.184 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.184 ************************************ 00:08:26.185 START TEST nvmf_filesystem_no_in_capsule 00:08:26.185 ************************************ 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3578613 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3578613 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 3578613 ']' 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.185 19:14:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:26.185 [2024-11-26 19:14:59.876972] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:08:26.185 [2024-11-26 19:14:59.877017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.185 [2024-11-26 19:14:59.961785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.185 [2024-11-26 19:15:00.000342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.185 [2024-11-26 19:15:00.000381] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:26.185 [2024-11-26 19:15:00.000390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.185 [2024-11-26 19:15:00.000403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.185 [2024-11-26 19:15:00.000409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.185 [2024-11-26 19:15:00.002014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.185 [2024-11-26 19:15:00.002176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.185 [2024-11-26 19:15:00.002224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.185 [2024-11-26 19:15:00.002226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 [2024-11-26 19:15:00.684335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 [2024-11-26 19:15:00.811740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:08:27.122 19:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.122 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:08:27.122 { 00:08:27.122 "name": "Malloc1", 00:08:27.122 "aliases": [ 00:08:27.122 "3aee3289-469c-4058-8f24-0d83eb202b08" 00:08:27.122 ], 00:08:27.122 "product_name": "Malloc disk", 00:08:27.122 "block_size": 512, 00:08:27.122 "num_blocks": 1048576, 00:08:27.122 "uuid": "3aee3289-469c-4058-8f24-0d83eb202b08", 00:08:27.122 "assigned_rate_limits": { 00:08:27.122 "rw_ios_per_sec": 0, 00:08:27.122 "rw_mbytes_per_sec": 0, 00:08:27.122 "r_mbytes_per_sec": 0, 00:08:27.122 "w_mbytes_per_sec": 0 00:08:27.122 }, 00:08:27.122 "claimed": true, 00:08:27.122 "claim_type": "exclusive_write", 00:08:27.122 "zoned": false, 00:08:27.122 "supported_io_types": { 00:08:27.122 "read": true, 00:08:27.122 "write": true, 00:08:27.122 "unmap": true, 00:08:27.122 "flush": true, 00:08:27.122 "reset": true, 00:08:27.122 "nvme_admin": false, 00:08:27.122 "nvme_io": false, 00:08:27.122 "nvme_io_md": false, 00:08:27.122 "write_zeroes": true, 00:08:27.122 "zcopy": true, 00:08:27.122 "get_zone_info": false, 00:08:27.122 "zone_management": false, 00:08:27.122 "zone_append": false, 00:08:27.122 "compare": false, 00:08:27.122 "compare_and_write": 
false, 00:08:27.122 "abort": true, 00:08:27.122 "seek_hole": false, 00:08:27.122 "seek_data": false, 00:08:27.122 "copy": true, 00:08:27.122 "nvme_iov_md": false 00:08:27.122 }, 00:08:27.122 "memory_domains": [ 00:08:27.122 { 00:08:27.122 "dma_device_id": "system", 00:08:27.122 "dma_device_type": 1 00:08:27.122 }, 00:08:27.122 { 00:08:27.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.123 "dma_device_type": 2 00:08:27.123 } 00:08:27.123 ], 00:08:27.123 "driver_specific": {} 00:08:27.123 } 00:08:27.123 ]' 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:27.123 19:15:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:28.498 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:08:28.498 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:08:28.498 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:28.498 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:28.498 19:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:31.030 19:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:31.030 19:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:31.966 19:15:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:32.904 19:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.904 ************************************ 00:08:32.904 START TEST filesystem_ext4 00:08:32.904 ************************************ 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:08:32.904 19:15:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:08:32.904 19:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:32.904 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.904 Discarding device blocks: 0/522240 done 00:08:32.904 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:32.904 Filesystem UUID: 020bff49-1698-4acd-88b2-6d965a8696a4 00:08:32.904 Superblock backups stored on blocks: 00:08:32.904 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:32.904 00:08:32.904 Allocating group tables: 0/64 done 00:08:32.904 Writing inode tables: 0/64 done 00:08:34.281 Creating journal (8192 blocks): done 00:08:36.220 Writing superblocks and filesystem accounting information: 0/64 done 00:08:36.220 00:08:36.220 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:08:36.220 19:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.781 19:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3578613 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.781 00:08:42.781 real 0m8.936s 00:08:42.781 user 0m0.013s 00:08:42.781 sys 0m0.068s 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:42.781 ************************************ 00:08:42.781 END TEST filesystem_ext4 00:08:42.781 ************************************ 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:42.781 
19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.781 ************************************ 00:08:42.781 START TEST filesystem_btrfs 00:08:42.781 ************************************ 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:08:42.781 19:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:08:42.781 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:42.781 btrfs-progs v6.8.1 00:08:42.781 See https://btrfs.readthedocs.io for more information. 00:08:42.781 00:08:42.781 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:42.781 NOTE: several default settings have changed in version 5.15, please make sure 00:08:42.781 this does not affect your deployments: 00:08:42.781 - DUP for metadata (-m dup) 00:08:42.781 - enabled no-holes (-O no-holes) 00:08:42.781 - enabled free-space-tree (-R free-space-tree) 00:08:42.781 00:08:42.781 Label: (null) 00:08:42.781 UUID: 939d5f34-4600-40a4-a6e9-6a9eb3c395b0 00:08:42.781 Node size: 16384 00:08:42.781 Sector size: 4096 (CPU page size: 4096) 00:08:42.781 Filesystem size: 510.00MiB 00:08:42.781 Block group profiles: 00:08:42.781 Data: single 8.00MiB 00:08:42.781 Metadata: DUP 32.00MiB 00:08:42.781 System: DUP 8.00MiB 00:08:42.781 SSD detected: yes 00:08:42.782 Zoned device: no 00:08:42.782 Features: extref, skinny-metadata, no-holes, free-space-tree 00:08:42.782 Checksum: crc32c 00:08:42.782 Number of devices: 1 00:08:42.782 Devices: 00:08:42.782 ID SIZE PATH 00:08:42.782 1 510.00MiB /dev/nvme0n1p1 00:08:42.782 00:08:42.782 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:08:42.782 19:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.782 19:15:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3578613 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.782 00:08:42.782 real 0m0.482s 00:08:42.782 user 0m0.016s 00:08:42.782 sys 0m0.095s 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.782 
19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:42.782 ************************************ 00:08:42.782 END TEST filesystem_btrfs 00:08:42.782 ************************************ 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:42.782 ************************************ 00:08:42.782 START TEST filesystem_xfs 00:08:42.782 ************************************ 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:08:42.782 19:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:42.782 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:42.782 = sectsz=512 attr=2, projid32bit=1 00:08:42.782 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:42.782 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:42.782 data = bsize=4096 blocks=130560, imaxpct=25 00:08:42.782 = sunit=0 swidth=0 blks 00:08:42.782 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:42.782 log =internal log bsize=4096 blocks=16384, version=2 00:08:42.782 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:42.782 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:43.720 Discarding blocks...Done. 
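The three filesystem sub-tests in this run (ext4, btrfs, xfs) each go through the harness's `make_filesystem` path, and the xtrace lines show it choosing a force flag per filesystem type before invoking mkfs: `'[' ext4 = ext4 ']'` yields `force=-F`, while btrfs and xfs fall through to `force=-f`. A minimal sketch of that flag selection follows; `fs_force_flag` is a hypothetical helper name for illustration, not a function in the SPDK scripts.

```shell
#!/bin/sh
# Sketch of the force-flag selection visible in the xtrace above:
# mkfs.ext4 takes -F to overwrite an existing filesystem, while
# mkfs.btrfs and mkfs.xfs take -f for the same purpose.
# fs_force_flag is a hypothetical name, not part of the SPDK harness.
fs_force_flag() {
    case "$1" in
        ext4) printf '%s\n' '-F' ;;
        btrfs|xfs) printf '%s\n' '-f' ;;
        *) printf '\n' ;;  # unknown fstype: no force flag
    esac
}

fs_force_flag ext4
fs_force_flag btrfs
fs_force_flag xfs
```

With the flag in hand, the harness then runs `mkfs.$fstype $force $dev_name`, which matches the `mkfs.ext4 -F /dev/nvme0n1p1` and `mkfs.xfs -f /dev/nvme0n1p1` invocations recorded above.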
00:08:43.720 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:08:43.720 19:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3578613 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:46.254 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:46.254 19:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:46.254 00:08:46.254 real 0m3.728s 00:08:46.254 user 0m0.022s 00:08:46.255 sys 0m0.054s 00:08:46.255 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.255 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:46.255 ************************************ 00:08:46.255 END TEST filesystem_xfs 00:08:46.255 ************************************ 00:08:46.255 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:46.255 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:46.255 19:15:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3578613 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3578613 ']' 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3578613 00:08:46.255 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3578613 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3578613' 00:08:46.514 killing process with pid 3578613 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3578613 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3578613 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:46.514 00:08:46.514 real 0m20.525s 00:08:46.514 user 1m21.098s 00:08:46.514 sys 0m1.193s 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.514 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.514 ************************************ 00:08:46.514 END TEST nvmf_filesystem_no_in_capsule 00:08:46.514 ************************************ 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.773 19:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:46.773 ************************************ 00:08:46.773 START TEST nvmf_filesystem_in_capsule 00:08:46.773 ************************************ 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:46.773 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3584465 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3584465 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3584465 ']' 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:46.774 19:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.774 [2024-11-26 19:15:20.450263] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:08:46.774 [2024-11-26 19:15:20.450309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.774 [2024-11-26 19:15:20.522672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.774 [2024-11-26 19:15:20.553240] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.774 [2024-11-26 19:15:20.553269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.774 [2024-11-26 19:15:20.553275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.774 [2024-11-26 19:15:20.553280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.774 [2024-11-26 19:15:20.553284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
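The `-m 0xF` coremask passed to `nvmf_tgt` above is what produces the "Total cores available: 4" notice and the four reactor-start lines that follow: the target runs one reactor per set bit in the mask. A minimal sketch of that relationship (pure shell arithmetic, no SPDK needed):

```shell
# Popcount of the -m coremask gives the reactor/core count reported
# by spdk_app_start ("Total cores available: 4" for -m 0xF).
mask=0xF
cores=0
v=$(( mask ))
while [ "$v" -gt 0 ]; do
  cores=$(( cores + (v & 1) ))
  v=$(( v >> 1 ))
done
echo "$cores"
```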
00:08:46.774 [2024-11-26 19:15:20.554644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.774 [2024-11-26 19:15:20.554795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.774 [2024-11-26 19:15:20.554944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.774 [2024-11-26 19:15:20.554946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 [2024-11-26 19:15:21.257156] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 [2024-11-26 19:15:21.374401] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.711 19:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:08:47.711 { 00:08:47.711 "name": "Malloc1", 00:08:47.711 "aliases": [ 00:08:47.711 "d94b5d68-487e-4957-8040-338d1c6d0d92" 00:08:47.711 ], 00:08:47.711 "product_name": "Malloc disk", 00:08:47.711 "block_size": 512, 00:08:47.711 "num_blocks": 1048576, 00:08:47.711 "uuid": "d94b5d68-487e-4957-8040-338d1c6d0d92", 00:08:47.711 "assigned_rate_limits": { 00:08:47.711 "rw_ios_per_sec": 0, 00:08:47.711 "rw_mbytes_per_sec": 0, 00:08:47.711 "r_mbytes_per_sec": 0, 00:08:47.711 "w_mbytes_per_sec": 0 00:08:47.711 }, 00:08:47.711 "claimed": true, 00:08:47.711 "claim_type": "exclusive_write", 00:08:47.711 "zoned": false, 00:08:47.711 "supported_io_types": { 00:08:47.711 "read": true, 00:08:47.711 "write": true, 00:08:47.711 "unmap": true, 00:08:47.711 "flush": true, 00:08:47.711 "reset": true, 00:08:47.711 "nvme_admin": false, 00:08:47.711 "nvme_io": false, 00:08:47.711 "nvme_io_md": false, 00:08:47.711 "write_zeroes": true, 00:08:47.711 "zcopy": true, 00:08:47.711 "get_zone_info": false, 00:08:47.711 "zone_management": false, 00:08:47.711 "zone_append": false, 00:08:47.711 "compare": false, 00:08:47.711 "compare_and_write": false, 00:08:47.711 "abort": true, 00:08:47.711 "seek_hole": false, 00:08:47.711 "seek_data": false, 00:08:47.711 "copy": true, 00:08:47.711 "nvme_iov_md": false 00:08:47.711 }, 00:08:47.711 "memory_domains": [ 00:08:47.711 { 00:08:47.711 "dma_device_id": "system", 00:08:47.711 "dma_device_type": 1 00:08:47.711 }, 00:08:47.711 { 00:08:47.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.711 "dma_device_type": 2 00:08:47.711 } 00:08:47.711 ], 00:08:47.711 
"driver_specific": {} 00:08:47.711 } 00:08:47.711 ]' 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:08:47.711 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:08:47.712 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:08:47.712 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:47.712 19:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:49.619 19:15:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:49.619 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:08:49.619 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:49.619 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:08:49.619 19:15:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:08:51.202 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:51.202 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:51.203 19:15:25 
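The `target/filesystem.sh@63` step traced above maps the subsystem serial `SPDKISFASTANDAWESOME` to its kernel block device via `lsblk -l -o NAME,SERIAL` plus a `grep -oP` lookahead. A sketch of the same lookup, using `awk` as a portable stand-in for the PCRE lookaround and an illustrative (not captured) lsblk sample:

```shell
# Resolve the device name whose SERIAL column carries the SPDK serial,
# mirroring the traced grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'.
# sample_lsblk is illustrative stand-in data, not real lsblk output.
sample_lsblk='NAME    SERIAL
nvme0n1 SPDKISFASTANDAWESOME'
nvme_name=$(printf '%s\n' "$sample_lsblk" \
  | awk '$2 == "SPDKISFASTANDAWESOME" { print $1 }')
echo "$nvme_name"
```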
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:51.203 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:51.461 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:52.030 19:15:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:52.966 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:52.966 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:52.967 ************************************ 00:08:52.967 START TEST filesystem_in_capsule_ext4 00:08:52.967 ************************************ 00:08:52.967 19:15:26 
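The `(( nvme_size == malloc_size ))` guard traced above reduces to simple arithmetic on the values from the `bdev_get_bdevs` output: a 512-byte block size times 1048576 blocks gives the 536870912 bytes that `sec_size_to_bytes` echoes for the connected device. A sketch of that cross-check:

```shell
# Size cross-check: malloc bdev size (from bdev_get_bdevs JSON above)
# must equal the byte size reported for the connected nvme device.
bs=512                      # "block_size" from the traced JSON
nb=1048576                  # "num_blocks" from the traced JSON
malloc_size=$(( bs * nb ))  # 536870912 bytes = 512 MiB
nvme_size=536870912         # value echoed by sec_size_to_bytes above
[ "$malloc_size" -eq "$nvme_size" ] && echo "sizes match"
```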
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:08:52.967 19:15:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:52.967 mke2fs 1.47.0 (5-Feb-2023) 00:08:52.967 Discarding device blocks: 
0/522240 done 00:08:52.967 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:52.967 Filesystem UUID: 4b1802e8-4e09-4e90-b533-9033226b1404 00:08:52.967 Superblock backups stored on blocks: 00:08:52.967 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:52.967 00:08:52.967 Allocating group tables: 0/64 done 00:08:52.967 Writing inode tables: 0/64 done 00:08:54.344 Creating journal (8192 blocks): done 00:08:54.344 Writing superblocks and filesystem accounting information: 0/6426/64 done 00:08:54.344 00:08:54.344 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:08:54.344 19:15:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:59.613 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3584465 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:59.872 00:08:59.872 real 0m6.882s 00:08:59.872 user 0m0.010s 00:08:59.872 sys 0m0.062s 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:59.872 ************************************ 00:08:59.872 END TEST filesystem_in_capsule_ext4 00:08:59.872 ************************************ 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:59.872 ************************************ 00:08:59.872 START 
TEST filesystem_in_capsule_btrfs 00:08:59.872 ************************************ 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:08:59.872 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:00.132 btrfs-progs v6.8.1 00:09:00.132 See https://btrfs.readthedocs.io for more information. 00:09:00.132 00:09:00.132 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:00.132 NOTE: several default settings have changed in version 5.15, please make sure 00:09:00.132 this does not affect your deployments: 00:09:00.132 - DUP for metadata (-m dup) 00:09:00.132 - enabled no-holes (-O no-holes) 00:09:00.132 - enabled free-space-tree (-R free-space-tree) 00:09:00.132 00:09:00.132 Label: (null) 00:09:00.132 UUID: a407e255-5a20-47b1-9fd7-a8af408b82e8 00:09:00.132 Node size: 16384 00:09:00.132 Sector size: 4096 (CPU page size: 4096) 00:09:00.132 Filesystem size: 510.00MiB 00:09:00.132 Block group profiles: 00:09:00.132 Data: single 8.00MiB 00:09:00.132 Metadata: DUP 32.00MiB 00:09:00.132 System: DUP 8.00MiB 00:09:00.132 SSD detected: yes 00:09:00.132 Zoned device: no 00:09:00.132 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:00.132 Checksum: crc32c 00:09:00.132 Number of devices: 1 00:09:00.132 Devices: 00:09:00.132 ID SIZE PATH 00:09:00.132 1 510.00MiB /dev/nvme0n1p1 00:09:00.132 00:09:00.132 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:00.132 19:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3584465 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:00.701 00:09:00.701 real 0m0.757s 00:09:00.701 user 0m0.024s 00:09:00.701 sys 0m0.083s 00:09:00.701 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:00.702 ************************************ 00:09:00.702 END TEST filesystem_in_capsule_btrfs 00:09:00.702 ************************************ 00:09:00.702 19:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:00.702 ************************************ 00:09:00.702 START TEST filesystem_in_capsule_xfs 00:09:00.702 ************************************ 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:00.702 
19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:00.702 19:15:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:00.702 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:00.702 = sectsz=512 attr=2, projid32bit=1 00:09:00.702 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:00.702 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:00.702 data = bsize=4096 blocks=130560, imaxpct=25 00:09:00.702 = sunit=0 swidth=0 blks 00:09:00.702 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:00.702 log =internal log bsize=4096 blocks=16384, version=2 00:09:00.702 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:00.702 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:01.651 Discarding blocks...Done. 
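Across the three filesystem tests above, `make_filesystem` picks its force flag per filesystem type: the trace shows `'[' ext4 = ext4 ']'` yielding `force=-F`, while the btrfs and xfs runs fall through to `force=-f`. A sketch of that selection logic (flag choice only, no mkfs invoked):

```shell
# Choose the mkfs force flag the way make_filesystem does above:
# mkfs.ext4 takes -F; mkfs.btrfs and mkfs.xfs take -f.
pick_force() {
  if [ "$1" = ext4 ]; then
    printf '%s\n' -F
  else
    printf '%s\n' -f
  fi
}
printf '%s %s %s\n' "$(pick_force ext4)" "$(pick_force btrfs)" "$(pick_force xfs)"
```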
00:09:01.651 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:01.651 19:15:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3584465 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:03.559 00:09:03.559 real 0m2.909s 00:09:03.559 user 0m0.021s 00:09:03.559 sys 0m0.050s 00:09:03.559 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.560 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:03.560 ************************************ 00:09:03.560 END TEST filesystem_in_capsule_xfs 00:09:03.560 ************************************ 00:09:03.560 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:03.560 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:03.560 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.819 19:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3584465 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3584465 ']' 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3584465 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:03.819 19:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3584465 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3584465' 00:09:03.819 killing process with pid 3584465 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3584465 00:09:03.819 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3584465 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:04.080 00:09:04.080 real 0m17.389s 00:09:04.080 user 1m8.718s 00:09:04.080 sys 0m1.113s 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:04.080 ************************************ 00:09:04.080 END TEST nvmf_filesystem_in_capsule 00:09:04.080 ************************************ 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.080 rmmod nvme_tcp 00:09:04.080 rmmod nvme_fabrics 00:09:04.080 rmmod nvme_keyring 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.080 19:15:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.619 00:09:06.619 real 0m45.761s 00:09:06.619 user 2m31.396s 00:09:06.619 sys 0m6.454s 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.619 ************************************ 00:09:06.619 END TEST nvmf_filesystem 00:09:06.619 ************************************ 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:06.619 ************************************ 00:09:06.619 START TEST nvmf_target_discovery 00:09:06.619 ************************************ 00:09:06.619 19:15:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:09:06.619 * Looking for test storage... 
00:09:06.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:06.619 
19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.619 --rc genhtml_branch_coverage=1 00:09:06.619 --rc genhtml_function_coverage=1 00:09:06.619 --rc genhtml_legend=1 00:09:06.619 --rc geninfo_all_blocks=1 00:09:06.619 --rc geninfo_unexecuted_blocks=1 00:09:06.619 00:09:06.619 ' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.619 --rc genhtml_branch_coverage=1 00:09:06.619 --rc genhtml_function_coverage=1 00:09:06.619 --rc genhtml_legend=1 00:09:06.619 --rc geninfo_all_blocks=1 00:09:06.619 --rc geninfo_unexecuted_blocks=1 00:09:06.619 00:09:06.619 ' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.619 --rc genhtml_branch_coverage=1 00:09:06.619 --rc genhtml_function_coverage=1 00:09:06.619 --rc genhtml_legend=1 00:09:06.619 --rc geninfo_all_blocks=1 00:09:06.619 --rc geninfo_unexecuted_blocks=1 00:09:06.619 00:09:06.619 ' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.619 --rc genhtml_branch_coverage=1 00:09:06.619 --rc genhtml_function_coverage=1 00:09:06.619 --rc genhtml_legend=1 00:09:06.619 --rc geninfo_all_blocks=1 00:09:06.619 --rc geninfo_unexecuted_blocks=1 00:09:06.619 00:09:06.619 ' 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.619 19:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.619 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.620 19:15:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.895 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:11.895 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:11.895 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:11.895 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:11.895 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:11.895 Found net devices under 0000:31:00.0: cvl_0_0 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:11.895 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:11.895 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:11.896 Found net devices under 0000:31:00.1: cvl_0_1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:11.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:11.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:09:11.896 00:09:11.896 --- 10.0.0.2 ping statistics --- 00:09:11.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.896 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:11.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:11.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:09:11.896 00:09:11.896 --- 10.0.0.1 ping statistics --- 00:09:11.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:11.896 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3593465 00:09:11.896 19:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3593465 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3593465 ']' 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.896 19:15:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:11.896 [2024-11-26 19:15:45.544507] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:09:11.896 [2024-11-26 19:15:45.544572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.896 [2024-11-26 19:15:45.635339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:11.896 [2024-11-26 19:15:45.687763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:11.896 [2024-11-26 19:15:45.687813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.896 [2024-11-26 19:15:45.687821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.896 [2024-11-26 19:15:45.687833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.896 [2024-11-26 19:15:45.687840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.896 [2024-11-26 19:15:45.689858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:11.896 [2024-11-26 19:15:45.689994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:11.896 [2024-11-26 19:15:45.690156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:11.896 [2024-11-26 19:15:45.690212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.832 [2024-11-26 19:15:46.360501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:12.832 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 Null1 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 
19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 [2024-11-26 19:15:46.413427] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 Null2 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 
19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 Null3 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 Null4 00:09:12.833 
19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.833 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:09:13.092 00:09:13.092 Discovery Log Number of Records 6, Generation counter 6 00:09:13.092 =====Discovery Log Entry 0====== 00:09:13.092 trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: current discovery subsystem 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4420 00:09:13.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: explicit discovery connections, duplicate discovery information 00:09:13.092 sectype: none 00:09:13.092 =====Discovery Log Entry 1====== 00:09:13.092 trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: nvme subsystem 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4420 00:09:13.092 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: none 00:09:13.092 sectype: none 00:09:13.092 =====Discovery Log Entry 2====== 00:09:13.092 
trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: nvme subsystem 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4420 00:09:13.092 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: none 00:09:13.092 sectype: none 00:09:13.092 =====Discovery Log Entry 3====== 00:09:13.092 trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: nvme subsystem 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4420 00:09:13.092 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: none 00:09:13.092 sectype: none 00:09:13.092 =====Discovery Log Entry 4====== 00:09:13.092 trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: nvme subsystem 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4420 00:09:13.092 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: none 00:09:13.092 sectype: none 00:09:13.092 =====Discovery Log Entry 5====== 00:09:13.092 trtype: tcp 00:09:13.092 adrfam: ipv4 00:09:13.092 subtype: discovery subsystem referral 00:09:13.092 treq: not required 00:09:13.092 portid: 0 00:09:13.092 trsvcid: 4430 00:09:13.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:13.092 traddr: 10.0.0.2 00:09:13.092 eflags: none 00:09:13.092 sectype: none 00:09:13.092 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:13.092 Perform nvmf subsystem discovery via RPC 00:09:13.092 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:13.092 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.092 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.092 [ 00:09:13.092 { 00:09:13.092 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:09:13.092 "subtype": "Discovery", 00:09:13.092 "listen_addresses": [ 00:09:13.092 { 00:09:13.092 "trtype": "TCP", 00:09:13.092 "adrfam": "IPv4", 00:09:13.092 "traddr": "10.0.0.2", 00:09:13.092 "trsvcid": "4420" 00:09:13.092 } 00:09:13.092 ], 00:09:13.092 "allow_any_host": true, 00:09:13.092 "hosts": [] 00:09:13.092 }, 00:09:13.092 { 00:09:13.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.092 "subtype": "NVMe", 00:09:13.092 "listen_addresses": [ 00:09:13.092 { 00:09:13.092 "trtype": "TCP", 00:09:13.092 "adrfam": "IPv4", 00:09:13.092 "traddr": "10.0.0.2", 00:09:13.092 "trsvcid": "4420" 00:09:13.092 } 00:09:13.092 ], 00:09:13.092 "allow_any_host": true, 00:09:13.092 "hosts": [], 00:09:13.092 "serial_number": "SPDK00000000000001", 00:09:13.092 "model_number": "SPDK bdev Controller", 00:09:13.092 "max_namespaces": 32, 00:09:13.092 "min_cntlid": 1, 00:09:13.092 "max_cntlid": 65519, 00:09:13.092 "namespaces": [ 00:09:13.092 { 00:09:13.092 "nsid": 1, 00:09:13.092 "bdev_name": "Null1", 00:09:13.092 "name": "Null1", 00:09:13.092 "nguid": "AA3863A6FC1A466D9DF6392A9057AFAC", 00:09:13.092 "uuid": "aa3863a6-fc1a-466d-9df6-392a9057afac" 00:09:13.092 } 00:09:13.092 ] 00:09:13.092 }, 00:09:13.092 { 00:09:13.092 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:13.092 "subtype": "NVMe", 00:09:13.092 "listen_addresses": [ 00:09:13.092 { 00:09:13.092 "trtype": "TCP", 00:09:13.092 "adrfam": "IPv4", 00:09:13.092 "traddr": "10.0.0.2", 00:09:13.092 "trsvcid": "4420" 00:09:13.092 } 00:09:13.092 ], 00:09:13.092 "allow_any_host": true, 00:09:13.093 "hosts": [], 00:09:13.093 "serial_number": "SPDK00000000000002", 00:09:13.093 "model_number": "SPDK bdev Controller", 00:09:13.093 "max_namespaces": 32, 00:09:13.093 "min_cntlid": 1, 00:09:13.093 "max_cntlid": 65519, 00:09:13.093 "namespaces": [ 00:09:13.093 { 00:09:13.093 "nsid": 1, 00:09:13.093 "bdev_name": "Null2", 00:09:13.093 "name": "Null2", 00:09:13.093 "nguid": "A9A7D6CC71E64DA8BB2347E3E225562F", 
00:09:13.093 "uuid": "a9a7d6cc-71e6-4da8-bb23-47e3e225562f" 00:09:13.093 } 00:09:13.093 ] 00:09:13.093 }, 00:09:13.093 { 00:09:13.093 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:13.093 "subtype": "NVMe", 00:09:13.093 "listen_addresses": [ 00:09:13.093 { 00:09:13.093 "trtype": "TCP", 00:09:13.093 "adrfam": "IPv4", 00:09:13.093 "traddr": "10.0.0.2", 00:09:13.093 "trsvcid": "4420" 00:09:13.093 } 00:09:13.093 ], 00:09:13.093 "allow_any_host": true, 00:09:13.093 "hosts": [], 00:09:13.093 "serial_number": "SPDK00000000000003", 00:09:13.093 "model_number": "SPDK bdev Controller", 00:09:13.093 "max_namespaces": 32, 00:09:13.093 "min_cntlid": 1, 00:09:13.093 "max_cntlid": 65519, 00:09:13.093 "namespaces": [ 00:09:13.093 { 00:09:13.093 "nsid": 1, 00:09:13.093 "bdev_name": "Null3", 00:09:13.093 "name": "Null3", 00:09:13.093 "nguid": "1FCAD69D693B422AA3F2A022F04D030F", 00:09:13.093 "uuid": "1fcad69d-693b-422a-a3f2-a022f04d030f" 00:09:13.093 } 00:09:13.093 ] 00:09:13.093 }, 00:09:13.093 { 00:09:13.093 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:13.093 "subtype": "NVMe", 00:09:13.093 "listen_addresses": [ 00:09:13.093 { 00:09:13.093 "trtype": "TCP", 00:09:13.093 "adrfam": "IPv4", 00:09:13.093 "traddr": "10.0.0.2", 00:09:13.093 "trsvcid": "4420" 00:09:13.093 } 00:09:13.093 ], 00:09:13.093 "allow_any_host": true, 00:09:13.093 "hosts": [], 00:09:13.093 "serial_number": "SPDK00000000000004", 00:09:13.093 "model_number": "SPDK bdev Controller", 00:09:13.093 "max_namespaces": 32, 00:09:13.093 "min_cntlid": 1, 00:09:13.093 "max_cntlid": 65519, 00:09:13.093 "namespaces": [ 00:09:13.093 { 00:09:13.093 "nsid": 1, 00:09:13.093 "bdev_name": "Null4", 00:09:13.093 "name": "Null4", 00:09:13.093 "nguid": "6F92DEBB80E844DC954DEFFC809D26C8", 00:09:13.093 "uuid": "6f92debb-80e8-44dc-954d-effc809d26c8" 00:09:13.093 } 00:09:13.093 ] 00:09:13.093 } 00:09:13.093 ] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 
19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:13.093 rmmod nvme_tcp 00:09:13.093 rmmod nvme_fabrics 00:09:13.093 rmmod nvme_keyring 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3593465 ']' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3593465 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3593465 ']' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3593465 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.093 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3593465 00:09:13.352 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:13.352 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:13.352 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3593465' 00:09:13.352 killing process with pid 3593465 00:09:13.352 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3593465 00:09:13.352 19:15:46 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3593465 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.352 19:15:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:15.884 00:09:15.884 real 0m9.156s 00:09:15.884 user 0m7.218s 00:09:15.884 sys 0m4.378s 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:15.884 ************************************ 00:09:15.884 END TEST nvmf_target_discovery 00:09:15.884 ************************************ 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:15.884 ************************************ 00:09:15.884 START TEST nvmf_referrals 00:09:15.884 ************************************ 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:15.884 * Looking for test storage... 
00:09:15.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:15.884 19:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.884 
--rc genhtml_branch_coverage=1 00:09:15.884 --rc genhtml_function_coverage=1 00:09:15.884 --rc genhtml_legend=1 00:09:15.884 --rc geninfo_all_blocks=1 00:09:15.884 --rc geninfo_unexecuted_blocks=1 00:09:15.884 00:09:15.884 ' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.884 --rc genhtml_branch_coverage=1 00:09:15.884 --rc genhtml_function_coverage=1 00:09:15.884 --rc genhtml_legend=1 00:09:15.884 --rc geninfo_all_blocks=1 00:09:15.884 --rc geninfo_unexecuted_blocks=1 00:09:15.884 00:09:15.884 ' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.884 --rc genhtml_branch_coverage=1 00:09:15.884 --rc genhtml_function_coverage=1 00:09:15.884 --rc genhtml_legend=1 00:09:15.884 --rc geninfo_all_blocks=1 00:09:15.884 --rc geninfo_unexecuted_blocks=1 00:09:15.884 00:09:15.884 ' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:15.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.884 --rc genhtml_branch_coverage=1 00:09:15.884 --rc genhtml_function_coverage=1 00:09:15.884 --rc genhtml_legend=1 00:09:15.884 --rc geninfo_all_blocks=1 00:09:15.884 --rc geninfo_unexecuted_blocks=1 00:09:15.884 00:09:15.884 ' 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:15.884 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:15.885 
19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:15.885 19:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:15.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:15.885 19:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:15.885 19:15:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:21.161 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:21.161 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:21.161 Found net devices under 0000:31:00.0: cvl_0_0 00:09:21.161 19:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:21.161 Found net devices under 0000:31:00.1: cvl_0_1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.161 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.161 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:09:21.161 00:09:21.161 --- 10.0.0.2 ping statistics --- 00:09:21.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.161 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:09:21.161 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.161 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.161 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:09:21.161 00:09:21.161 --- 10.0.0.1 ping statistics --- 00:09:21.161 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.162 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3598266 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3598266 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3598266 ']' 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.162 19:15:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.162 [2024-11-26 19:15:55.001154] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:09:21.162 [2024-11-26 19:15:55.001222] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.421 [2024-11-26 19:15:55.092767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.421 [2024-11-26 19:15:55.146264] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.421 [2024-11-26 19:15:55.146318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:21.421 [2024-11-26 19:15:55.146327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.421 [2024-11-26 19:15:55.146334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.421 [2024-11-26 19:15:55.146341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.421 [2024-11-26 19:15:55.148447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.421 [2024-11-26 19:15:55.148609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.421 [2024-11-26 19:15:55.148739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.421 [2024-11-26 19:15:55.148739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.989 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:21.989 [2024-11-26 19:15:55.846500] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 [2024-11-26 19:15:55.867518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:22.249 19:15:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.249 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:22.250 19:15:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.250 19:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.250 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:22.509 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:22.767 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:09:22.767 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.767 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.767 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:22.768 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:23.027 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:23.285 19:15:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:23.285 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:23.544 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:23.544 19:15:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:23.544 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:23.544 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:23.544 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:23.544 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:23.807 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.066 rmmod nvme_tcp 00:09:24.066 rmmod nvme_fabrics 00:09:24.066 rmmod nvme_keyring 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3598266 ']' 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3598266 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3598266 ']' 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3598266 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3598266 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:24.066 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3598266' 00:09:24.066 killing process with pid 3598266 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 3598266 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3598266 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:24.067 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:09:24.325 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:24.325 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:24.325 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.325 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.325 19:15:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.230 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:26.230 00:09:26.230 real 0m10.786s 00:09:26.230 user 0m13.267s 00:09:26.230 sys 0m4.925s 00:09:26.230 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.230 19:15:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:26.230 
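The `killprocess` steps traced above first probe the pid with `kill -0`, resolve its command name via `ps`, and refuse to signal anything running as `sudo` before actually killing it. A condensed, runnable sketch of that flow (a simplification for illustration, not the real autotest_common.sh helper):

```shell
killprocess_sketch() {
  # $1 = pid to terminate. Mirrors the checks in the trace: pid must be
  # alive, and its command name must not be the sudo wrapper.
  local pid=$1 process_name
  kill -0 "$pid" 2>/dev/null || return 1            # pid not running
  process_name=$(ps --no-headers -o comm= "$pid")
  [ "$process_name" = sudo ] && return 1            # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
}
```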
************************************ 00:09:26.230 END TEST nvmf_referrals 00:09:26.230 ************************************ 00:09:26.230 19:16:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:26.230 19:16:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:26.230 19:16:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.230 19:16:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:26.230 ************************************ 00:09:26.230 START TEST nvmf_connect_disconnect 00:09:26.230 ************************************ 00:09:26.230 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:26.230 * Looking for test storage... 
00:09:26.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
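The `lt 1.15 2` call above resolves through `cmp_versions`, which splits both versions on `.`/`-`/`:`, pads the shorter one with zeros, and compares component by component. The same logic in compact form (a condensed re-implementation for illustration, not scripts/common.sh itself):

```shell
lt_sketch() {
  # Return 0 when version $1 is strictly less than version $2.
  local IFS=.-:                       # same separators as cmp_versions
  local -a ver1=($1) ver2=($2)
  local i n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components default to 0, matching the zero-padding in the trace.
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1   # equal versions are not "less than"
}
```

This is why lcov 1.15 is classified as older than 2 even though a plain string comparison would disagree.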
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.490 --rc genhtml_branch_coverage=1 00:09:26.490 --rc genhtml_function_coverage=1 00:09:26.490 --rc genhtml_legend=1 00:09:26.490 --rc geninfo_all_blocks=1 00:09:26.490 --rc geninfo_unexecuted_blocks=1 00:09:26.490 00:09:26.490 ' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.490 --rc genhtml_branch_coverage=1 00:09:26.490 --rc genhtml_function_coverage=1 00:09:26.490 --rc genhtml_legend=1 00:09:26.490 --rc geninfo_all_blocks=1 00:09:26.490 --rc geninfo_unexecuted_blocks=1 00:09:26.490 00:09:26.490 ' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.490 --rc genhtml_branch_coverage=1 00:09:26.490 --rc genhtml_function_coverage=1 00:09:26.490 --rc genhtml_legend=1 00:09:26.490 --rc geninfo_all_blocks=1 00:09:26.490 --rc geninfo_unexecuted_blocks=1 00:09:26.490 00:09:26.490 ' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:26.490 --rc genhtml_branch_coverage=1 00:09:26.490 --rc genhtml_function_coverage=1 00:09:26.490 --rc genhtml_legend=1 00:09:26.490 --rc geninfo_all_blocks=1 00:09:26.490 --rc geninfo_unexecuted_blocks=1 00:09:26.490 00:09:26.490 ' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.490 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
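Each nested script re-sources paths/export.sh, and every pass prepends the Go, protoc and golangci directories again, which is why the PATH echoed above carries seven copies of the same three entries. Collapsing the duplicates while keeping first-seen order is a one-liner; `dedupe_path` below is a sketch, not a helper the repo provides:

```shell
dedupe_path() {
  # Split $1 on ':', keep only the first occurrence of each entry, re-join,
  # and strip the trailing ':' that awk's ORS leaves behind.
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}
```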
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:26.491 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
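Note the `[: : integer expression expected` complaint from nvmf/common.sh line 33: an empty variable reached a numeric `[ '' -eq 1 ]` test, which `[` rejects before the script carries on. Defaulting the expansion avoids the error; the helper name and its positional flag argument below are illustrative, not the script's real variable:

```shell
flag_enabled() {
  # "${1:-0}" substitutes 0 when the flag is unset or empty, so the numeric
  # comparison never sees the '' that triggered the error in the log.
  [ "${1:-0}" -eq 1 ]
}
```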
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:09:26.491 19:16:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.844 19:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.844 19:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:31.844 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:31.844 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.844 19:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:31.844 Found net devices under 0000:31:00.0: cvl_0_0 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.844 19:16:05 
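The interface discovery above is pure sysfs globbing: expand `net/` under the PCI device directory, then strip everything up to the last `/` with the `##*/` expansion to get bare interface names like `cvl_0_0`. The same logic, parameterized on the sysfs root so it can be exercised against a fake tree (the function and the root argument are illustrative additions):

```shell
list_pci_net_devs() {
  # $1 = sysfs root (/sys/bus/pci/devices on a real system), $2 = PCI address.
  local -a pci_net_devs=("$1/$2/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
  printf '%s\n' "${pci_net_devs[@]}"
}
```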
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:31.844 Found net devices under 0000:31:00.1: cvl_0_1 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.844 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.845 19:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:09:31.845 00:09:31.845 --- 10.0.0.2 ping statistics --- 00:09:31.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.845 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:09:31.845 00:09:31.845 --- 10.0.0.1 ping statistics --- 00:09:31.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.845 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
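The two connectivity checks above only consume ping's exit status, but the summary line carries the rtt numbers too; pulling the average out is a small awk exercise (the sample line is copied from the log):

```shell
line='rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms'
# The second-to-last whitespace field holds the four values; the second
# '/'-separated token of that field is the average.
avg_rtt=$(printf '%s\n' "$line" | awk '{ split($(NF-1), t, "/"); print t[2] }')
```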
nvmfpid=3603357 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3603357 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3603357 ']' 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:31.845 19:16:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:31.845 [2024-11-26 19:16:05.630437] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:09:31.845 [2024-11-26 19:16:05.630490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.105 [2024-11-26 19:16:05.715703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.105 [2024-11-26 19:16:05.751631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:09:32.105 [2024-11-26 19:16:05.751664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.105 [2024-11-26 19:16:05.751672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.105 [2024-11-26 19:16:05.751680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.105 [2024-11-26 19:16:05.751685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:32.105 [2024-11-26 19:16:05.753753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.105 [2024-11-26 19:16:05.753866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:32.105 [2024-11-26 19:16:05.754018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.105 [2024-11-26 19:16:05.754019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:32.673 19:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 [2024-11-26 19:16:06.432636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.673 19:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:32.673 [2024-11-26 19:16:06.491899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:32.673 19:16:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:36.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:50.926 19:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.926 rmmod nvme_tcp 00:09:50.926 rmmod nvme_fabrics 00:09:50.926 rmmod nvme_keyring 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3603357 ']' 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3603357 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3603357 ']' 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3603357 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:50.926 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3603357 
00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3603357' 00:09:51.185 killing process with pid 3603357 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3603357 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3603357 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.185 19:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.185 19:16:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.740 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:53.740 00:09:53.740 real 0m26.946s 00:09:53.740 user 1m17.704s 00:09:53.740 sys 0m5.306s 00:09:53.740 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.740 19:16:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:53.740 ************************************ 00:09:53.740 END TEST nvmf_connect_disconnect 00:09:53.740 ************************************ 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:53.740 ************************************ 00:09:53.740 START TEST nvmf_multitarget 00:09:53.740 ************************************ 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:53.740 * Looking for test storage... 
00:09:53.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:53.740 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.740 --rc genhtml_branch_coverage=1 00:09:53.740 --rc genhtml_function_coverage=1 00:09:53.740 --rc genhtml_legend=1 00:09:53.740 --rc geninfo_all_blocks=1 00:09:53.740 --rc geninfo_unexecuted_blocks=1 00:09:53.740 00:09:53.740 ' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:53.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.740 --rc genhtml_branch_coverage=1 00:09:53.740 --rc genhtml_function_coverage=1 00:09:53.740 --rc genhtml_legend=1 00:09:53.740 --rc geninfo_all_blocks=1 00:09:53.740 --rc geninfo_unexecuted_blocks=1 00:09:53.740 00:09:53.740 ' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:53.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.740 --rc genhtml_branch_coverage=1 00:09:53.740 --rc genhtml_function_coverage=1 00:09:53.740 --rc genhtml_legend=1 00:09:53.740 --rc geninfo_all_blocks=1 00:09:53.740 --rc geninfo_unexecuted_blocks=1 00:09:53.740 00:09:53.740 ' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:53.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:53.740 --rc genhtml_branch_coverage=1 00:09:53.740 --rc genhtml_function_coverage=1 00:09:53.740 --rc genhtml_legend=1 00:09:53.740 --rc geninfo_all_blocks=1 00:09:53.740 --rc geninfo_unexecuted_blocks=1 00:09:53.740 00:09:53.740 ' 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.740 19:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.740 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:53.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.741 19:16:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:09:53.741 19:16:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:09:59.013 19:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:59.013 19:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:59.013 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:59.013 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:59.013 19:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.013 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:59.014 Found net devices under 0000:31:00.0: cvl_0_0 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.014 
19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:59.014 Found net devices under 0000:31:00.1: cvl_0_1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:59.014 19:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:59.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:59.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:09:59.014 00:09:59.014 --- 10.0.0.2 ping statistics --- 00:09:59.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.014 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:59.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:59.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:09:59.014 00:09:59.014 --- 10.0.0.1 ping statistics --- 00:09:59.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:59.014 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3612033 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 3612033 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3612033 ']' 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.014 19:16:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.014 [2024-11-26 19:16:32.593477] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:09:59.014 [2024-11-26 19:16:32.593538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.014 [2024-11-26 19:16:32.686565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.014 [2024-11-26 19:16:32.740643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.014 [2024-11-26 19:16:32.740695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:59.014 [2024-11-26 19:16:32.740703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.014 [2024-11-26 19:16:32.740710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.014 [2024-11-26 19:16:32.740716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.014 [2024-11-26 19:16:32.742784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.014 [2024-11-26 19:16:32.742920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.014 [2024-11-26 19:16:32.743046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.014 [2024-11-26 19:16:32.743046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:59.582 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:59.582 19:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:59.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:59.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:59.842 "nvmf_tgt_1" 00:09:59.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:59.842 "nvmf_tgt_2" 00:09:59.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:59.842 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:00.101 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:00.101 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:00.101 true 00:10:00.101 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:00.101 true 00:10:00.101 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:00.101 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:00.360 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:00.360 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:00.360 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:00.360 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:00.360 19:16:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:00.360 rmmod nvme_tcp 00:10:00.360 rmmod nvme_fabrics 00:10:00.360 rmmod nvme_keyring 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:00.360 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3612033 ']' 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3612033 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3612033 ']' 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3612033 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3612033 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3612033' 00:10:00.361 killing process with pid 3612033 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3612033 00:10:00.361 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3612033 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.657 19:16:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:02.565 00:10:02.565 real 0m9.256s 00:10:02.565 user 0m8.010s 00:10:02.565 sys 0m4.447s 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:02.565 ************************************ 00:10:02.565 END TEST nvmf_multitarget 00:10:02.565 ************************************ 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:02.565 19:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.566 19:16:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:02.566 ************************************ 00:10:02.566 START TEST nvmf_rpc 00:10:02.566 ************************************ 00:10:02.566 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:02.566 * Looking for test storage... 
00:10:02.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.566 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:02.566 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:02.566 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.826 19:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.826 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:02.826 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.826 --rc genhtml_branch_coverage=1 00:10:02.826 --rc genhtml_function_coverage=1 00:10:02.826 --rc genhtml_legend=1 00:10:02.826 --rc geninfo_all_blocks=1 00:10:02.826 --rc geninfo_unexecuted_blocks=1 
00:10:02.827 00:10:02.827 ' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.827 --rc genhtml_branch_coverage=1 00:10:02.827 --rc genhtml_function_coverage=1 00:10:02.827 --rc genhtml_legend=1 00:10:02.827 --rc geninfo_all_blocks=1 00:10:02.827 --rc geninfo_unexecuted_blocks=1 00:10:02.827 00:10:02.827 ' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.827 --rc genhtml_branch_coverage=1 00:10:02.827 --rc genhtml_function_coverage=1 00:10:02.827 --rc genhtml_legend=1 00:10:02.827 --rc geninfo_all_blocks=1 00:10:02.827 --rc geninfo_unexecuted_blocks=1 00:10:02.827 00:10:02.827 ' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:02.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.827 --rc genhtml_branch_coverage=1 00:10:02.827 --rc genhtml_function_coverage=1 00:10:02.827 --rc genhtml_legend=1 00:10:02.827 --rc geninfo_all_blocks=1 00:10:02.827 --rc geninfo_unexecuted_blocks=1 00:10:02.827 00:10:02.827 ' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.827 19:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.827 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.827 19:16:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.827 19:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.106 
19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.106 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 
(0x8086 - 0x159b)' 00:10:08.107 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:08.107 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:08.107 Found net devices under 0000:31:00.0: cvl_0_0 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:08.107 Found net devices under 0000:31:00.1: cvl_0_1 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.107 19:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.107 
19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:10:08.107 00:10:08.107 --- 10.0.0.2 ping statistics --- 00:10:08.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.107 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:10:08.107 00:10:08.107 --- 10.0.0.1 ping statistics --- 00:10:08.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.107 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.107 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3616766 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.367 
19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3616766 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3616766 ']' 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.367 19:16:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.367 [2024-11-26 19:16:42.026633] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:10:08.367 [2024-11-26 19:16:42.026702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.367 [2024-11-26 19:16:42.118941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.367 [2024-11-26 19:16:42.173153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.367 [2024-11-26 19:16:42.173209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.367 [2024-11-26 19:16:42.173219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.367 [2024-11-26 19:16:42.173226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:08.367 [2024-11-26 19:16:42.173232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.367 [2024-11-26 19:16:42.175660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.367 [2024-11-26 19:16:42.175820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.367 [2024-11-26 19:16:42.175983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.367 [2024-11-26 19:16:42.175983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.305 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:09.305 "tick_rate": 2400000000, 00:10:09.305 "poll_groups": [ 00:10:09.305 { 00:10:09.305 "name": "nvmf_tgt_poll_group_000", 00:10:09.305 "admin_qpairs": 0, 00:10:09.305 "io_qpairs": 0, 00:10:09.305 
"current_admin_qpairs": 0, 00:10:09.305 "current_io_qpairs": 0, 00:10:09.305 "pending_bdev_io": 0, 00:10:09.305 "completed_nvme_io": 0, 00:10:09.305 "transports": [] 00:10:09.305 }, 00:10:09.305 { 00:10:09.305 "name": "nvmf_tgt_poll_group_001", 00:10:09.305 "admin_qpairs": 0, 00:10:09.305 "io_qpairs": 0, 00:10:09.305 "current_admin_qpairs": 0, 00:10:09.305 "current_io_qpairs": 0, 00:10:09.305 "pending_bdev_io": 0, 00:10:09.305 "completed_nvme_io": 0, 00:10:09.305 "transports": [] 00:10:09.305 }, 00:10:09.305 { 00:10:09.305 "name": "nvmf_tgt_poll_group_002", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 "current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [] 00:10:09.306 }, 00:10:09.306 { 00:10:09.306 "name": "nvmf_tgt_poll_group_003", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 "current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [] 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 }' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 [2024-11-26 19:16:42.946734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:09.306 "tick_rate": 2400000000, 00:10:09.306 "poll_groups": [ 00:10:09.306 { 00:10:09.306 "name": "nvmf_tgt_poll_group_000", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 "current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [ 00:10:09.306 { 00:10:09.306 "trtype": "TCP" 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 }, 00:10:09.306 { 00:10:09.306 "name": "nvmf_tgt_poll_group_001", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 "current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [ 00:10:09.306 { 00:10:09.306 "trtype": "TCP" 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 }, 00:10:09.306 { 00:10:09.306 "name": "nvmf_tgt_poll_group_002", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 
"current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [ 00:10:09.306 { 00:10:09.306 "trtype": "TCP" 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 }, 00:10:09.306 { 00:10:09.306 "name": "nvmf_tgt_poll_group_003", 00:10:09.306 "admin_qpairs": 0, 00:10:09.306 "io_qpairs": 0, 00:10:09.306 "current_admin_qpairs": 0, 00:10:09.306 "current_io_qpairs": 0, 00:10:09.306 "pending_bdev_io": 0, 00:10:09.306 "completed_nvme_io": 0, 00:10:09.306 "transports": [ 00:10:09.306 { 00:10:09.306 "trtype": "TCP" 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 } 00:10:09.306 ] 00:10:09.306 }' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:09.306 19:16:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 Malloc1 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.306 [2024-11-26 19:16:43.094665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.306 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.307 
19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.2 -s 4420 00:10:09.307 [2024-11-26 19:16:43.117785] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:09.307 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:09.307 could not add new controller: failed to write to nvme-fabrics device 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.307 19:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.307 19:16:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.214 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.214 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.214 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.214 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.214 19:16:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.118 19:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.118 [2024-11-26 19:16:46.873813] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb' 00:10:13.118 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:13.118 could not add new controller: failed to write to nvme-fabrics device 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.118 19:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.118 19:16:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.024 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.024 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:15.024 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.024 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:15.024 19:16:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.927 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 [2024-11-26 19:16:50.529116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.928 19:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.370 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.370 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:18.370 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.370 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:18.370 19:16:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.352 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.610 19:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.610 [2024-11-26 19:16:54.230818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.610 19:16:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:21.989 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:21.989 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:21.989 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:21.989 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:21.989 19:16:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:24.524 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 [2024-11-26 19:16:57.955730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.525 19:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.903 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.903 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:25.903 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:10:25.903 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:25.903 19:16:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 [2024-11-26 19:17:01.603255] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.806 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.807 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:27.807 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.807 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:27.807 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.807 19:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.710 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:29.710 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:29.710 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:29.710 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:29.710 19:17:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.622 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.623 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.624 [2024-11-26 19:17:05.299241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.624 19:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.624 19:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:33.008 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:33.008 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:10:33.008 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:33.008 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:33.008 19:17:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.546 19:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.546 [2024-11-26 19:17:09.003498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.546 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 [2024-11-26 19:17:09.051608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.547 
19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 [2024-11-26 19:17:09.099720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.547 
19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 [2024-11-26 19:17:09.147883] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 [2024-11-26 
19:17:09.196031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:35.547 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.548 
19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:35.548 "tick_rate": 2400000000, 00:10:35.548 "poll_groups": [ 00:10:35.548 { 00:10:35.548 "name": "nvmf_tgt_poll_group_000", 00:10:35.548 "admin_qpairs": 0, 00:10:35.548 "io_qpairs": 224, 00:10:35.548 "current_admin_qpairs": 0, 00:10:35.548 "current_io_qpairs": 0, 00:10:35.548 "pending_bdev_io": 0, 00:10:35.548 "completed_nvme_io": 273, 00:10:35.548 "transports": [ 00:10:35.548 { 00:10:35.548 "trtype": "TCP" 00:10:35.548 } 00:10:35.548 ] 00:10:35.548 }, 00:10:35.548 { 00:10:35.548 "name": "nvmf_tgt_poll_group_001", 00:10:35.548 "admin_qpairs": 1, 00:10:35.548 "io_qpairs": 223, 00:10:35.548 "current_admin_qpairs": 0, 00:10:35.548 "current_io_qpairs": 0, 00:10:35.548 "pending_bdev_io": 0, 00:10:35.548 "completed_nvme_io": 386, 00:10:35.548 "transports": [ 00:10:35.548 { 00:10:35.548 "trtype": "TCP" 00:10:35.548 } 00:10:35.548 ] 00:10:35.548 }, 00:10:35.548 { 00:10:35.548 "name": "nvmf_tgt_poll_group_002", 00:10:35.548 "admin_qpairs": 6, 00:10:35.548 "io_qpairs": 218, 00:10:35.548 "current_admin_qpairs": 0, 00:10:35.548 "current_io_qpairs": 0, 00:10:35.548 "pending_bdev_io": 0, 00:10:35.548 "completed_nvme_io": 355, 00:10:35.548 "transports": [ 00:10:35.548 { 00:10:35.548 "trtype": "TCP" 00:10:35.548 } 00:10:35.548 ] 00:10:35.548 }, 00:10:35.548 { 00:10:35.548 "name": "nvmf_tgt_poll_group_003", 00:10:35.548 "admin_qpairs": 0, 00:10:35.548 "io_qpairs": 224, 
00:10:35.548 "current_admin_qpairs": 0, 00:10:35.548 "current_io_qpairs": 0, 00:10:35.548 "pending_bdev_io": 0, 00:10:35.548 "completed_nvme_io": 225, 00:10:35.548 "transports": [ 00:10:35.548 { 00:10:35.548 "trtype": "TCP" 00:10:35.548 } 00:10:35.548 ] 00:10:35.548 } 00:10:35.548 ] 00:10:35.548 }' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.548 rmmod nvme_tcp 00:10:35.548 rmmod nvme_fabrics 00:10:35.548 rmmod nvme_keyring 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3616766 ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3616766 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3616766 ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3616766 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3616766 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3616766' 00:10:35.548 killing process with pid 3616766 00:10:35.548 19:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3616766 00:10:35.548 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3616766 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.807 19:17:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.712 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.712 00:10:37.712 real 0m35.242s 00:10:37.712 user 1m50.302s 00:10:37.712 sys 0m6.084s 00:10:37.712 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.712 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.712 ************************************ 00:10:37.712 END TEST 
nvmf_rpc 00:10:37.712 ************************************ 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.973 ************************************ 00:10:37.973 START TEST nvmf_invalid 00:10:37.973 ************************************ 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:37.973 * Looking for test storage... 00:10:37.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.973 --rc genhtml_branch_coverage=1 00:10:37.973 --rc genhtml_function_coverage=1 00:10:37.973 --rc genhtml_legend=1 00:10:37.973 --rc geninfo_all_blocks=1 00:10:37.973 --rc geninfo_unexecuted_blocks=1 00:10:37.973 00:10:37.973 ' 
00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.973 --rc genhtml_branch_coverage=1 00:10:37.973 --rc genhtml_function_coverage=1 00:10:37.973 --rc genhtml_legend=1 00:10:37.973 --rc geninfo_all_blocks=1 00:10:37.973 --rc geninfo_unexecuted_blocks=1 00:10:37.973 00:10:37.973 ' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.973 --rc genhtml_branch_coverage=1 00:10:37.973 --rc genhtml_function_coverage=1 00:10:37.973 --rc genhtml_legend=1 00:10:37.973 --rc geninfo_all_blocks=1 00:10:37.973 --rc geninfo_unexecuted_blocks=1 00:10:37.973 00:10:37.973 ' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.973 --rc genhtml_branch_coverage=1 00:10:37.973 --rc genhtml_function_coverage=1 00:10:37.973 --rc genhtml_legend=1 00:10:37.973 --rc geninfo_all_blocks=1 00:10:37.973 --rc geninfo_unexecuted_blocks=1 00:10:37.973 00:10:37.973 ' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.973 19:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.973 
19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.973 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.974 19:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.974 19:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.974 19:17:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:10:43.248 19:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.248 19:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:43.248 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:43.248 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:43.248 Found net devices under 0000:31:00.0: cvl_0_0 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:43.248 Found net devices under 0000:31:00.1: cvl_0_1 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.248 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.249 19:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.249 19:17:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.249 19:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:43.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:43.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:10:43.249 00:10:43.249 --- 10.0.0.2 ping statistics --- 00:10:43.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.249 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:10:43.249 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:10:43.508 00:10:43.508 --- 10.0.0.1 ping statistics --- 00:10:43.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.508 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:43.508 19:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3627262 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3627262 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3627262 ']' 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:43.508 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.508 [2024-11-26 19:17:17.187853] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:10:43.508 [2024-11-26 19:17:17.187903] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.508 [2024-11-26 19:17:17.274883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.508 [2024-11-26 19:17:17.317201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.508 [2024-11-26 19:17:17.317240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.508 [2024-11-26 19:17:17.317248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.508 [2024-11-26 19:17:17.317255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.508 [2024-11-26 19:17:17.317261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:43.508 [2024-11-26 19:17:17.318900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.508 [2024-11-26 19:17:17.319025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.508 [2024-11-26 19:17:17.319176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.508 [2024-11-26 19:17:17.319176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:44.444 19:17:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8849 00:10:44.444 [2024-11-26 19:17:18.131731] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:44.444 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:44.444 { 00:10:44.444 "nqn": "nqn.2016-06.io.spdk:cnode8849", 00:10:44.444 "tgt_name": "foobar", 00:10:44.444 "method": "nvmf_create_subsystem", 00:10:44.444 "req_id": 1 00:10:44.444 } 00:10:44.444 Got JSON-RPC error 
response 00:10:44.444 response: 00:10:44.444 { 00:10:44.444 "code": -32603, 00:10:44.444 "message": "Unable to find target foobar" 00:10:44.444 }' 00:10:44.444 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:44.444 { 00:10:44.444 "nqn": "nqn.2016-06.io.spdk:cnode8849", 00:10:44.444 "tgt_name": "foobar", 00:10:44.444 "method": "nvmf_create_subsystem", 00:10:44.444 "req_id": 1 00:10:44.444 } 00:10:44.444 Got JSON-RPC error response 00:10:44.444 response: 00:10:44.444 { 00:10:44.444 "code": -32603, 00:10:44.444 "message": "Unable to find target foobar" 00:10:44.444 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:44.444 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:44.444 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4792 00:10:44.444 [2024-11-26 19:17:18.300291] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4792: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:44.704 { 00:10:44.704 "nqn": "nqn.2016-06.io.spdk:cnode4792", 00:10:44.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:44.704 "method": "nvmf_create_subsystem", 00:10:44.704 "req_id": 1 00:10:44.704 } 00:10:44.704 Got JSON-RPC error response 00:10:44.704 response: 00:10:44.704 { 00:10:44.704 "code": -32602, 00:10:44.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:44.704 }' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:44.704 { 00:10:44.704 "nqn": "nqn.2016-06.io.spdk:cnode4792", 00:10:44.704 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:44.704 "method": "nvmf_create_subsystem", 00:10:44.704 
"req_id": 1 00:10:44.704 } 00:10:44.704 Got JSON-RPC error response 00:10:44.704 response: 00:10:44.704 { 00:10:44.704 "code": -32602, 00:10:44.704 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:44.704 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11516 00:10:44.704 [2024-11-26 19:17:18.464860] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11516: invalid model number 'SPDK_Controller' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:44.704 { 00:10:44.704 "nqn": "nqn.2016-06.io.spdk:cnode11516", 00:10:44.704 "model_number": "SPDK_Controller\u001f", 00:10:44.704 "method": "nvmf_create_subsystem", 00:10:44.704 "req_id": 1 00:10:44.704 } 00:10:44.704 Got JSON-RPC error response 00:10:44.704 response: 00:10:44.704 { 00:10:44.704 "code": -32602, 00:10:44.704 "message": "Invalid MN SPDK_Controller\u001f" 00:10:44.704 }' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:44.704 { 00:10:44.704 "nqn": "nqn.2016-06.io.spdk:cnode11516", 00:10:44.704 "model_number": "SPDK_Controller\u001f", 00:10:44.704 "method": "nvmf_create_subsystem", 00:10:44.704 "req_id": 1 00:10:44.704 } 00:10:44.704 Got JSON-RPC error response 00:10:44.704 response: 00:10:44.704 { 00:10:44.704 "code": -32602, 00:10:44.704 "message": "Invalid MN SPDK_Controller\u001f" 00:10:44.704 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:44.704 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:44.704 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:44.704 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:44.704 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.705 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.705 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ p == \- ]]
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'p`|A5Ce\^QJCVro+9^0X'\'''
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'p`|A5Ce\^QJCVro+9^0X'\''' nqn.2016-06.io.spdk:cnode1478
00:10:44.965 [2024-11-26 19:17:18.717655] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1478: invalid serial number 'p`|A5Ce\^QJCVro+9^0X''
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:10:44.965 {
00:10:44.965 "nqn": "nqn.2016-06.io.spdk:cnode1478",
00:10:44.965 "serial_number": "p`|A5Ce\\^QJCVro+9^0X'\''",
00:10:44.965 "method": "nvmf_create_subsystem",
00:10:44.965 "req_id": 1
00:10:44.965 }
00:10:44.965 Got JSON-RPC error response
00:10:44.965 response:
00:10:44.965 {
00:10:44.965 "code": -32602,
00:10:44.965 "message": "Invalid SN p`|A5Ce\\^QJCVro+9^0X'\''"
00:10:44.965 }'
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:10:44.965 {
00:10:44.965 "nqn": "nqn.2016-06.io.spdk:cnode1478",
00:10:44.965 "serial_number": "p`|A5Ce\\^QJCVro+9^0X'",
00:10:44.965 "method": "nvmf_create_subsystem",
00:10:44.965 "req_id": 1
00:10:44.965 }
00:10:44.965 Got JSON-RPC error response
00:10:44.965 response:
00:10:44.965 {
00:10:44.965 "code": -32602,
00:10:44.965 "message": "Invalid SN p`|A5Ce\\^QJCVro+9^0X'"
00:10:44.965 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:10:44.965
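The long per-character trace around this point is `gen_random_s` from target/invalid.sh, which assembles a random string one byte at a time from the decimal codes 32-127 listed in its `chars` array. A condensed sketch of the same idea (not the exact invalid.sh code, which unrolls each iteration across `printf %x` / `echo -e` / `string+=` steps as traced above):

```shell
#!/usr/bin/env bash
# Condensed sketch of the traced gen_random_s loop: build a string of $1
# characters drawn uniformly from the byte range 32-127 (the chars array).
gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))     # same range as chars=('32' ... '127')
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        # printf %x + echo -e, as in invalid.sh@25
        string+=$(echo -e "\\x$(printf %x "$code")")
    done
    # printf rather than echo, so strings starting with '-' are not
    # misread as echo options
    printf '%s\n' "$string"
}

gen_random_s 21
```

Strings built this way deliberately include shell metacharacters, quotes, and backslashes, which is why the trace's serial numbers need the heavy quoting seen in the `rpc.py` invocations.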
19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:44.965 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:44.965 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:44.966 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:44.966 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:44.966 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:45.226 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:45.226 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:45.226 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.226 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K' 00:10:45.227 19:17:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K' nqn.2016-06.io.spdk:cnode19703 00:10:45.227 [2024-11-26 19:17:19.066822] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19703: invalid model number 'dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K' 00:10:45.227 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:45.227 { 00:10:45.227 "nqn": "nqn.2016-06.io.spdk:cnode19703", 00:10:45.227 "model_number": "dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K", 00:10:45.227 "method": "nvmf_create_subsystem", 00:10:45.227 "req_id": 1 00:10:45.227 } 00:10:45.227 Got JSON-RPC error response 00:10:45.227 response: 00:10:45.227 { 00:10:45.227 "code": -32602, 00:10:45.227 "message": "Invalid MN dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K" 00:10:45.227 }' 00:10:45.227 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:45.227 { 00:10:45.227 "nqn": 
"nqn.2016-06.io.spdk:cnode19703", 00:10:45.227 "model_number": "dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K", 00:10:45.227 "method": "nvmf_create_subsystem", 00:10:45.227 "req_id": 1 00:10:45.227 } 00:10:45.227 Got JSON-RPC error response 00:10:45.227 response: 00:10:45.227 { 00:10:45.227 "code": -32602, 00:10:45.227 "message": "Invalid MN dsz*7Wc|F%A>-#_Q*]e_$KrPm#7(HSY)rGy+G%1)K" 00:10:45.227 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:45.227 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:45.489 [2024-11-26 19:17:19.227405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.489 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:45.751 [2024-11-26 19:17:19.553391] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:45.751 { 00:10:45.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:45.751 "listen_address": { 00:10:45.751 "trtype": "tcp", 00:10:45.751 "traddr": "", 00:10:45.751 "trsvcid": "4421" 
00:10:45.751 }, 00:10:45.751 "method": "nvmf_subsystem_remove_listener", 00:10:45.751 "req_id": 1 00:10:45.751 } 00:10:45.751 Got JSON-RPC error response 00:10:45.751 response: 00:10:45.751 { 00:10:45.751 "code": -32602, 00:10:45.751 "message": "Invalid parameters" 00:10:45.751 }' 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:45.751 { 00:10:45.751 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:45.751 "listen_address": { 00:10:45.751 "trtype": "tcp", 00:10:45.751 "traddr": "", 00:10:45.751 "trsvcid": "4421" 00:10:45.751 }, 00:10:45.751 "method": "nvmf_subsystem_remove_listener", 00:10:45.751 "req_id": 1 00:10:45.751 } 00:10:45.751 Got JSON-RPC error response 00:10:45.751 response: 00:10:45.751 { 00:10:45.751 "code": -32602, 00:10:45.751 "message": "Invalid parameters" 00:10:45.751 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:45.751 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11646 -i 0 00:10:46.009 [2024-11-26 19:17:19.713828] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11646: invalid cntlid range [0-65519] 00:10:46.009 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:46.009 { 00:10:46.009 "nqn": "nqn.2016-06.io.spdk:cnode11646", 00:10:46.009 "min_cntlid": 0, 00:10:46.009 "method": "nvmf_create_subsystem", 00:10:46.009 "req_id": 1 00:10:46.009 } 00:10:46.009 Got JSON-RPC error response 00:10:46.009 response: 00:10:46.009 { 00:10:46.009 "code": -32602, 00:10:46.009 "message": "Invalid cntlid range [0-65519]" 00:10:46.009 }' 00:10:46.009 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:46.009 { 00:10:46.009 "nqn": "nqn.2016-06.io.spdk:cnode11646", 00:10:46.009 "min_cntlid": 0, 00:10:46.009 "method": 
"nvmf_create_subsystem", 00:10:46.009 "req_id": 1 00:10:46.009 } 00:10:46.009 Got JSON-RPC error response 00:10:46.009 response: 00:10:46.009 { 00:10:46.009 "code": -32602, 00:10:46.009 "message": "Invalid cntlid range [0-65519]" 00:10:46.009 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:46.009 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19663 -i 65520 00:10:46.268 [2024-11-26 19:17:19.874347] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19663: invalid cntlid range [65520-65519] 00:10:46.268 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:46.268 { 00:10:46.268 "nqn": "nqn.2016-06.io.spdk:cnode19663", 00:10:46.268 "min_cntlid": 65520, 00:10:46.268 "method": "nvmf_create_subsystem", 00:10:46.268 "req_id": 1 00:10:46.268 } 00:10:46.268 Got JSON-RPC error response 00:10:46.268 response: 00:10:46.268 { 00:10:46.268 "code": -32602, 00:10:46.268 "message": "Invalid cntlid range [65520-65519]" 00:10:46.268 }' 00:10:46.268 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:46.268 { 00:10:46.268 "nqn": "nqn.2016-06.io.spdk:cnode19663", 00:10:46.268 "min_cntlid": 65520, 00:10:46.268 "method": "nvmf_create_subsystem", 00:10:46.268 "req_id": 1 00:10:46.268 } 00:10:46.268 Got JSON-RPC error response 00:10:46.268 response: 00:10:46.268 { 00:10:46.268 "code": -32602, 00:10:46.268 "message": "Invalid cntlid range [65520-65519]" 00:10:46.268 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:46.268 19:17:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12 -I 0 00:10:46.268 [2024-11-26 19:17:20.038867] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode12: invalid cntlid range [1-0] 00:10:46.268 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:46.268 { 00:10:46.268 "nqn": "nqn.2016-06.io.spdk:cnode12", 00:10:46.268 "max_cntlid": 0, 00:10:46.268 "method": "nvmf_create_subsystem", 00:10:46.268 "req_id": 1 00:10:46.268 } 00:10:46.268 Got JSON-RPC error response 00:10:46.268 response: 00:10:46.268 { 00:10:46.268 "code": -32602, 00:10:46.268 "message": "Invalid cntlid range [1-0]" 00:10:46.268 }' 00:10:46.268 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:46.268 { 00:10:46.268 "nqn": "nqn.2016-06.io.spdk:cnode12", 00:10:46.268 "max_cntlid": 0, 00:10:46.268 "method": "nvmf_create_subsystem", 00:10:46.268 "req_id": 1 00:10:46.268 } 00:10:46.268 Got JSON-RPC error response 00:10:46.268 response: 00:10:46.268 { 00:10:46.268 "code": -32602, 00:10:46.268 "message": "Invalid cntlid range [1-0]" 00:10:46.268 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:46.268 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32163 -I 65520 00:10:46.528 [2024-11-26 19:17:20.203379] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32163: invalid cntlid range [1-65520] 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:46.528 { 00:10:46.528 "nqn": "nqn.2016-06.io.spdk:cnode32163", 00:10:46.528 "max_cntlid": 65520, 00:10:46.528 "method": "nvmf_create_subsystem", 00:10:46.528 "req_id": 1 00:10:46.528 } 00:10:46.528 Got JSON-RPC error response 00:10:46.528 response: 00:10:46.528 { 00:10:46.528 "code": -32602, 00:10:46.528 "message": "Invalid cntlid range [1-65520]" 00:10:46.528 }' 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:10:46.528 { 00:10:46.528 "nqn": "nqn.2016-06.io.spdk:cnode32163", 00:10:46.528 "max_cntlid": 65520, 00:10:46.528 "method": "nvmf_create_subsystem", 00:10:46.528 "req_id": 1 00:10:46.528 } 00:10:46.528 Got JSON-RPC error response 00:10:46.528 response: 00:10:46.528 { 00:10:46.528 "code": -32602, 00:10:46.528 "message": "Invalid cntlid range [1-65520]" 00:10:46.528 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode544 -i 6 -I 5 00:10:46.528 [2024-11-26 19:17:20.363878] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode544: invalid cntlid range [6-5] 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:46.528 { 00:10:46.528 "nqn": "nqn.2016-06.io.spdk:cnode544", 00:10:46.528 "min_cntlid": 6, 00:10:46.528 "max_cntlid": 5, 00:10:46.528 "method": "nvmf_create_subsystem", 00:10:46.528 "req_id": 1 00:10:46.528 } 00:10:46.528 Got JSON-RPC error response 00:10:46.528 response: 00:10:46.528 { 00:10:46.528 "code": -32602, 00:10:46.528 "message": "Invalid cntlid range [6-5]" 00:10:46.528 }' 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:46.528 { 00:10:46.528 "nqn": "nqn.2016-06.io.spdk:cnode544", 00:10:46.528 "min_cntlid": 6, 00:10:46.528 "max_cntlid": 5, 00:10:46.528 "method": "nvmf_create_subsystem", 00:10:46.528 "req_id": 1 00:10:46.528 } 00:10:46.528 Got JSON-RPC error response 00:10:46.528 response: 00:10:46.528 { 00:10:46.528 "code": -32602, 00:10:46.528 "message": "Invalid cntlid range [6-5]" 00:10:46.528 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:46.528 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:46.787 { 00:10:46.787 "name": "foobar", 00:10:46.787 "method": "nvmf_delete_target", 00:10:46.787 "req_id": 1 00:10:46.787 } 00:10:46.787 Got JSON-RPC error response 00:10:46.787 response: 00:10:46.787 { 00:10:46.787 "code": -32602, 00:10:46.787 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:46.787 }' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:46.787 { 00:10:46.787 "name": "foobar", 00:10:46.787 "method": "nvmf_delete_target", 00:10:46.787 "req_id": 1 00:10:46.787 } 00:10:46.787 Got JSON-RPC error response 00:10:46.787 response: 00:10:46.787 { 00:10:46.787 "code": -32602, 00:10:46.787 "message": "The specified target doesn't exist, cannot delete it." 00:10:46.787 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.787 rmmod nvme_tcp 00:10:46.787 
rmmod nvme_fabrics 00:10:46.787 rmmod nvme_keyring 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3627262 ']' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3627262 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3627262 ']' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3627262 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3627262 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3627262' 00:10:46.787 killing process with pid 3627262 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3627262 00:10:46.787 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3627262 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:47.046 19:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.046 19:17:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.949 00:10:48.949 real 0m11.120s 00:10:48.949 user 0m16.875s 00:10:48.949 sys 0m4.847s 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:48.949 ************************************ 00:10:48.949 END TEST nvmf_invalid 00:10:48.949 ************************************ 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.949 ************************************ 00:10:48.949 START TEST nvmf_connect_stress 00:10:48.949 ************************************ 00:10:48.949 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:49.209 * Looking for test storage... 00:10:49.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:49.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.209 --rc genhtml_branch_coverage=1 00:10:49.209 --rc genhtml_function_coverage=1 00:10:49.209 --rc genhtml_legend=1 00:10:49.209 --rc 
geninfo_all_blocks=1 00:10:49.209 --rc geninfo_unexecuted_blocks=1 00:10:49.209 00:10:49.209 ' 00:10:49.209 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:49.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.209 --rc genhtml_branch_coverage=1 00:10:49.209 --rc genhtml_function_coverage=1 00:10:49.209 --rc genhtml_legend=1 00:10:49.209 --rc geninfo_all_blocks=1 00:10:49.209 --rc geninfo_unexecuted_blocks=1 00:10:49.209 00:10:49.209 ' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:49.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.210 --rc genhtml_branch_coverage=1 00:10:49.210 --rc genhtml_function_coverage=1 00:10:49.210 --rc genhtml_legend=1 00:10:49.210 --rc geninfo_all_blocks=1 00:10:49.210 --rc geninfo_unexecuted_blocks=1 00:10:49.210 00:10:49.210 ' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:49.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.210 --rc genhtml_branch_coverage=1 00:10:49.210 --rc genhtml_function_coverage=1 00:10:49.210 --rc genhtml_legend=1 00:10:49.210 --rc geninfo_all_blocks=1 00:10:49.210 --rc geninfo_unexecuted_blocks=1 00:10:49.210 00:10:49.210 ' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.210 
19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.210 19:17:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.489 19:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:54.489 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:54.490 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:54.490 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.490 19:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:54.490 Found net devices under 0000:31:00.0: cvl_0_0 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:54.490 Found net devices under 0000:31:00.1: cvl_0_1 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.490 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:54.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:10:54.749 00:10:54.749 --- 10.0.0.2 ping statistics --- 00:10:54.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.749 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:10:54.749 00:10:54.749 --- 10.0.0.1 ping statistics --- 00:10:54.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.749 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3632623 00:10:54.749 19:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3632623 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3632623 ']' 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.749 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.750 19:17:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.008 [2024-11-26 19:17:28.625263] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:10:55.008 [2024-11-26 19:17:28.625331] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.008 [2024-11-26 19:17:28.717701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.008 [2024-11-26 19:17:28.769949] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:55.008 [2024-11-26 19:17:28.770000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.008 [2024-11-26 19:17:28.770010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.008 [2024-11-26 19:17:28.770017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.008 [2024-11-26 19:17:28.770024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:55.008 [2024-11-26 19:17:28.771980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.008 [2024-11-26 19:17:28.772160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.008 [2024-11-26 19:17:28.772190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.576 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:10:55.835 [2024-11-26 19:17:29.441581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.835 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.835 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.836 [2024-11-26 19:17:29.457919] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.836 NULL1 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3632812 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.836 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.096 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.096 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:10:56.096 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.096 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.096 19:17:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.368 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.368 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:10:56.369 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.369 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.369 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.935 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.935 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:10:56.935 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.935 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.935 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.193 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.193 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:10:57.193 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.193 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.193 19:17:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.859 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.859 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:11:05.859 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:05.859 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.859 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.859 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3632812 00:11:06.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3632812) - No such process 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3632812 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:06.118 19:17:39
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.118 rmmod nvme_tcp 00:11:06.118 rmmod nvme_fabrics 00:11:06.118 rmmod nvme_keyring 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3632623 ']' 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3632623 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3632623 ']' 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3632623 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3632623 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3632623' 00:11:06.118 killing process with pid 3632623 00:11:06.118 19:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3632623 00:11:06.118 19:17:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3632623 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.378 19:17:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.282 00:11:08.282 real 0m19.291s 00:11:08.282 user 0m41.937s 00:11:08.282 sys 0m7.507s 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:08.282 ************************************ 00:11:08.282 END TEST nvmf_connect_stress 00:11:08.282 ************************************ 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.282 ************************************ 00:11:08.282 START TEST nvmf_fused_ordering 00:11:08.282 ************************************ 00:11:08.282 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:08.541 * Looking for test storage... 
00:11:08.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:08.541 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.541 19:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.541 --rc genhtml_branch_coverage=1 00:11:08.541 --rc genhtml_function_coverage=1 00:11:08.541 --rc genhtml_legend=1 00:11:08.541 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.541 --rc genhtml_branch_coverage=1 00:11:08.541 --rc genhtml_function_coverage=1 00:11:08.541 --rc genhtml_legend=1 00:11:08.541 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.541 --rc genhtml_branch_coverage=1 00:11:08.541 --rc genhtml_function_coverage=1 00:11:08.541 --rc genhtml_legend=1 00:11:08.541 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:08.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.541 --rc genhtml_branch_coverage=1 00:11:08.541 --rc genhtml_function_coverage=1 00:11:08.541 --rc genhtml_legend=1 00:11:08.541 --rc geninfo_all_blocks=1 00:11:08.541 --rc geninfo_unexecuted_blocks=1 00:11:08.541 00:11:08.541 ' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.541 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:08.542 19:17:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.813 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:13.813 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.813 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:13.813 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.813 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:13.813 Found net devices under 0000:31:00.0: cvl_0_0 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:13.813 Found net devices under 0000:31:00.1: cvl_0_1 
00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.813 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:13.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:13.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:11:13.814 00:11:13.814 --- 10.0.0.2 ping statistics --- 00:11:13.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.814 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:11:13.814 00:11:13.814 --- 10.0.0.1 ping statistics --- 00:11:13.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.814 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:13.814 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:14.073 19:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3639515 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3639515 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3639515 ']' 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.073 [2024-11-26 19:17:47.732932] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:11:14.073 [2024-11-26 19:17:47.732981] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.073 [2024-11-26 19:17:47.803387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.073 [2024-11-26 19:17:47.832042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.073 [2024-11-26 19:17:47.832068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.073 [2024-11-26 19:17:47.832075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.073 [2024-11-26 19:17:47.832079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.073 [2024-11-26 19:17:47.832083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:14.073 [2024-11-26 19:17:47.832516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.073 [2024-11-26 19:17:47.931201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.073 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.332 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 [2024-11-26 19:17:47.947407] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 NULL1 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.333 19:17:47 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:14.333 [2024-11-26 19:17:47.989175] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:11:14.333 [2024-11-26 19:17:47.989202] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639540 ] 00:11:14.593 Attached to nqn.2016-06.io.spdk:cnode1 00:11:14.593 Namespace ID: 1 size: 1GB 00:11:14.593 fused_ordering(0) 00:11:14.593 fused_ordering(1) 00:11:14.593 fused_ordering(2) 00:11:14.593 fused_ordering(3) 00:11:14.593 fused_ordering(4) 00:11:14.593 fused_ordering(5) 00:11:14.593 fused_ordering(6) 00:11:14.593 fused_ordering(7) 00:11:14.593 fused_ordering(8) 00:11:14.593 fused_ordering(9) 00:11:14.593 fused_ordering(10) 00:11:14.593 fused_ordering(11) 00:11:14.593 fused_ordering(12) 00:11:14.593 fused_ordering(13) 00:11:14.594 fused_ordering(14) 00:11:14.594 fused_ordering(15) 00:11:14.594 fused_ordering(16) 00:11:14.594 fused_ordering(17) 00:11:14.594 fused_ordering(18) 00:11:14.594 fused_ordering(19) 00:11:14.594 fused_ordering(20) 00:11:14.594 fused_ordering(21) 00:11:14.594 fused_ordering(22) 00:11:14.594 fused_ordering(23) 00:11:14.594 fused_ordering(24) 00:11:14.594 fused_ordering(25) 00:11:14.594 fused_ordering(26) 00:11:14.594 fused_ordering(27) 00:11:14.594 
fused_ordering(28) 00:11:14.594 … fused_ordering(204) 00:11:14.594 fused_ordering(205) 00:11:14.855 … fused_ordering(392) [repetitive fused_ordering counter trace, entries 28 through 392, elided]
00:11:14.856 fused_ordering(393) 00:11:14.856 fused_ordering(394) 00:11:14.856 fused_ordering(395) 00:11:14.856 fused_ordering(396) 00:11:14.856 fused_ordering(397) 00:11:14.856 fused_ordering(398) 00:11:14.856 fused_ordering(399) 00:11:14.856 fused_ordering(400) 00:11:14.856 fused_ordering(401) 00:11:14.856 fused_ordering(402) 00:11:14.856 fused_ordering(403) 00:11:14.856 fused_ordering(404) 00:11:14.856 fused_ordering(405) 00:11:14.856 fused_ordering(406) 00:11:14.856 fused_ordering(407) 00:11:14.856 fused_ordering(408) 00:11:14.856 fused_ordering(409) 00:11:14.856 fused_ordering(410) 00:11:15.424 fused_ordering(411) 00:11:15.424 fused_ordering(412) 00:11:15.424 fused_ordering(413) 00:11:15.424 fused_ordering(414) 00:11:15.424 fused_ordering(415) 00:11:15.424 fused_ordering(416) 00:11:15.424 fused_ordering(417) 00:11:15.424 fused_ordering(418) 00:11:15.424 fused_ordering(419) 00:11:15.424 fused_ordering(420) 00:11:15.424 fused_ordering(421) 00:11:15.424 fused_ordering(422) 00:11:15.424 fused_ordering(423) 00:11:15.424 fused_ordering(424) 00:11:15.424 fused_ordering(425) 00:11:15.424 fused_ordering(426) 00:11:15.424 fused_ordering(427) 00:11:15.424 fused_ordering(428) 00:11:15.424 fused_ordering(429) 00:11:15.424 fused_ordering(430) 00:11:15.424 fused_ordering(431) 00:11:15.424 fused_ordering(432) 00:11:15.424 fused_ordering(433) 00:11:15.424 fused_ordering(434) 00:11:15.424 fused_ordering(435) 00:11:15.424 fused_ordering(436) 00:11:15.424 fused_ordering(437) 00:11:15.424 fused_ordering(438) 00:11:15.425 fused_ordering(439) 00:11:15.425 fused_ordering(440) 00:11:15.425 fused_ordering(441) 00:11:15.425 fused_ordering(442) 00:11:15.425 fused_ordering(443) 00:11:15.425 fused_ordering(444) 00:11:15.425 fused_ordering(445) 00:11:15.425 fused_ordering(446) 00:11:15.425 fused_ordering(447) 00:11:15.425 fused_ordering(448) 00:11:15.425 fused_ordering(449) 00:11:15.425 fused_ordering(450) 00:11:15.425 fused_ordering(451) 00:11:15.425 fused_ordering(452) 00:11:15.425 
fused_ordering(453) 00:11:15.425 fused_ordering(454) 00:11:15.425 fused_ordering(455) 00:11:15.425 fused_ordering(456) 00:11:15.425 fused_ordering(457) 00:11:15.425 fused_ordering(458) 00:11:15.425 fused_ordering(459) 00:11:15.425 fused_ordering(460) 00:11:15.425 fused_ordering(461) 00:11:15.425 fused_ordering(462) 00:11:15.425 fused_ordering(463) 00:11:15.425 fused_ordering(464) 00:11:15.425 fused_ordering(465) 00:11:15.425 fused_ordering(466) 00:11:15.425 fused_ordering(467) 00:11:15.425 fused_ordering(468) 00:11:15.425 fused_ordering(469) 00:11:15.425 fused_ordering(470) 00:11:15.425 fused_ordering(471) 00:11:15.425 fused_ordering(472) 00:11:15.425 fused_ordering(473) 00:11:15.425 fused_ordering(474) 00:11:15.425 fused_ordering(475) 00:11:15.425 fused_ordering(476) 00:11:15.425 fused_ordering(477) 00:11:15.425 fused_ordering(478) 00:11:15.425 fused_ordering(479) 00:11:15.425 fused_ordering(480) 00:11:15.425 fused_ordering(481) 00:11:15.425 fused_ordering(482) 00:11:15.425 fused_ordering(483) 00:11:15.425 fused_ordering(484) 00:11:15.425 fused_ordering(485) 00:11:15.425 fused_ordering(486) 00:11:15.425 fused_ordering(487) 00:11:15.425 fused_ordering(488) 00:11:15.425 fused_ordering(489) 00:11:15.425 fused_ordering(490) 00:11:15.425 fused_ordering(491) 00:11:15.425 fused_ordering(492) 00:11:15.425 fused_ordering(493) 00:11:15.425 fused_ordering(494) 00:11:15.425 fused_ordering(495) 00:11:15.425 fused_ordering(496) 00:11:15.425 fused_ordering(497) 00:11:15.425 fused_ordering(498) 00:11:15.425 fused_ordering(499) 00:11:15.425 fused_ordering(500) 00:11:15.425 fused_ordering(501) 00:11:15.425 fused_ordering(502) 00:11:15.425 fused_ordering(503) 00:11:15.425 fused_ordering(504) 00:11:15.425 fused_ordering(505) 00:11:15.425 fused_ordering(506) 00:11:15.425 fused_ordering(507) 00:11:15.425 fused_ordering(508) 00:11:15.425 fused_ordering(509) 00:11:15.425 fused_ordering(510) 00:11:15.425 fused_ordering(511) 00:11:15.425 fused_ordering(512) 00:11:15.425 fused_ordering(513) 
00:11:15.425 fused_ordering(514) 00:11:15.425 fused_ordering(515) 00:11:15.425 fused_ordering(516) 00:11:15.425 fused_ordering(517) 00:11:15.425 fused_ordering(518) 00:11:15.425 fused_ordering(519) 00:11:15.425 fused_ordering(520) 00:11:15.425 fused_ordering(521) 00:11:15.425 fused_ordering(522) 00:11:15.425 fused_ordering(523) 00:11:15.425 fused_ordering(524) 00:11:15.425 fused_ordering(525) 00:11:15.425 fused_ordering(526) 00:11:15.425 fused_ordering(527) 00:11:15.425 fused_ordering(528) 00:11:15.425 fused_ordering(529) 00:11:15.425 fused_ordering(530) 00:11:15.425 fused_ordering(531) 00:11:15.425 fused_ordering(532) 00:11:15.425 fused_ordering(533) 00:11:15.425 fused_ordering(534) 00:11:15.425 fused_ordering(535) 00:11:15.425 fused_ordering(536) 00:11:15.425 fused_ordering(537) 00:11:15.425 fused_ordering(538) 00:11:15.425 fused_ordering(539) 00:11:15.425 fused_ordering(540) 00:11:15.425 fused_ordering(541) 00:11:15.425 fused_ordering(542) 00:11:15.425 fused_ordering(543) 00:11:15.425 fused_ordering(544) 00:11:15.425 fused_ordering(545) 00:11:15.425 fused_ordering(546) 00:11:15.425 fused_ordering(547) 00:11:15.425 fused_ordering(548) 00:11:15.425 fused_ordering(549) 00:11:15.425 fused_ordering(550) 00:11:15.425 fused_ordering(551) 00:11:15.425 fused_ordering(552) 00:11:15.425 fused_ordering(553) 00:11:15.425 fused_ordering(554) 00:11:15.425 fused_ordering(555) 00:11:15.425 fused_ordering(556) 00:11:15.425 fused_ordering(557) 00:11:15.425 fused_ordering(558) 00:11:15.425 fused_ordering(559) 00:11:15.425 fused_ordering(560) 00:11:15.425 fused_ordering(561) 00:11:15.425 fused_ordering(562) 00:11:15.425 fused_ordering(563) 00:11:15.425 fused_ordering(564) 00:11:15.425 fused_ordering(565) 00:11:15.425 fused_ordering(566) 00:11:15.425 fused_ordering(567) 00:11:15.425 fused_ordering(568) 00:11:15.425 fused_ordering(569) 00:11:15.425 fused_ordering(570) 00:11:15.425 fused_ordering(571) 00:11:15.425 fused_ordering(572) 00:11:15.425 fused_ordering(573) 00:11:15.425 
fused_ordering(574) 00:11:15.425 fused_ordering(575) 00:11:15.425 fused_ordering(576) 00:11:15.425 fused_ordering(577) 00:11:15.425 fused_ordering(578) 00:11:15.425 fused_ordering(579) 00:11:15.425 fused_ordering(580) 00:11:15.425 fused_ordering(581) 00:11:15.425 fused_ordering(582) 00:11:15.425 fused_ordering(583) 00:11:15.425 fused_ordering(584) 00:11:15.425 fused_ordering(585) 00:11:15.425 fused_ordering(586) 00:11:15.425 fused_ordering(587) 00:11:15.425 fused_ordering(588) 00:11:15.425 fused_ordering(589) 00:11:15.425 fused_ordering(590) 00:11:15.425 fused_ordering(591) 00:11:15.425 fused_ordering(592) 00:11:15.425 fused_ordering(593) 00:11:15.425 fused_ordering(594) 00:11:15.425 fused_ordering(595) 00:11:15.425 fused_ordering(596) 00:11:15.425 fused_ordering(597) 00:11:15.425 fused_ordering(598) 00:11:15.425 fused_ordering(599) 00:11:15.425 fused_ordering(600) 00:11:15.425 fused_ordering(601) 00:11:15.425 fused_ordering(602) 00:11:15.425 fused_ordering(603) 00:11:15.425 fused_ordering(604) 00:11:15.425 fused_ordering(605) 00:11:15.425 fused_ordering(606) 00:11:15.425 fused_ordering(607) 00:11:15.425 fused_ordering(608) 00:11:15.425 fused_ordering(609) 00:11:15.425 fused_ordering(610) 00:11:15.425 fused_ordering(611) 00:11:15.425 fused_ordering(612) 00:11:15.425 fused_ordering(613) 00:11:15.425 fused_ordering(614) 00:11:15.425 fused_ordering(615) 00:11:15.993 fused_ordering(616) 00:11:15.993 fused_ordering(617) 00:11:15.993 fused_ordering(618) 00:11:15.993 fused_ordering(619) 00:11:15.993 fused_ordering(620) 00:11:15.993 fused_ordering(621) 00:11:15.993 fused_ordering(622) 00:11:15.993 fused_ordering(623) 00:11:15.993 fused_ordering(624) 00:11:15.993 fused_ordering(625) 00:11:15.993 fused_ordering(626) 00:11:15.993 fused_ordering(627) 00:11:15.993 fused_ordering(628) 00:11:15.993 fused_ordering(629) 00:11:15.993 fused_ordering(630) 00:11:15.993 fused_ordering(631) 00:11:15.993 fused_ordering(632) 00:11:15.993 fused_ordering(633) 00:11:15.993 fused_ordering(634) 
00:11:15.993 fused_ordering(635) 00:11:15.993 fused_ordering(636) 00:11:15.993 fused_ordering(637) 00:11:15.993 fused_ordering(638) 00:11:15.993 fused_ordering(639) 00:11:15.993 fused_ordering(640) 00:11:15.993 fused_ordering(641) 00:11:15.993 fused_ordering(642) 00:11:15.993 fused_ordering(643) 00:11:15.993 fused_ordering(644) 00:11:15.993 fused_ordering(645) 00:11:15.993 fused_ordering(646) 00:11:15.993 fused_ordering(647) 00:11:15.993 fused_ordering(648) 00:11:15.993 fused_ordering(649) 00:11:15.993 fused_ordering(650) 00:11:15.993 fused_ordering(651) 00:11:15.993 fused_ordering(652) 00:11:15.993 fused_ordering(653) 00:11:15.993 fused_ordering(654) 00:11:15.993 fused_ordering(655) 00:11:15.993 fused_ordering(656) 00:11:15.993 fused_ordering(657) 00:11:15.993 fused_ordering(658) 00:11:15.993 fused_ordering(659) 00:11:15.993 fused_ordering(660) 00:11:15.993 fused_ordering(661) 00:11:15.993 fused_ordering(662) 00:11:15.993 fused_ordering(663) 00:11:15.993 fused_ordering(664) 00:11:15.993 fused_ordering(665) 00:11:15.993 fused_ordering(666) 00:11:15.993 fused_ordering(667) 00:11:15.994 fused_ordering(668) 00:11:15.994 fused_ordering(669) 00:11:15.994 fused_ordering(670) 00:11:15.994 fused_ordering(671) 00:11:15.994 fused_ordering(672) 00:11:15.994 fused_ordering(673) 00:11:15.994 fused_ordering(674) 00:11:15.994 fused_ordering(675) 00:11:15.994 fused_ordering(676) 00:11:15.994 fused_ordering(677) 00:11:15.994 fused_ordering(678) 00:11:15.994 fused_ordering(679) 00:11:15.994 fused_ordering(680) 00:11:15.994 fused_ordering(681) 00:11:15.994 fused_ordering(682) 00:11:15.994 fused_ordering(683) 00:11:15.994 fused_ordering(684) 00:11:15.994 fused_ordering(685) 00:11:15.994 fused_ordering(686) 00:11:15.994 fused_ordering(687) 00:11:15.994 fused_ordering(688) 00:11:15.994 fused_ordering(689) 00:11:15.994 fused_ordering(690) 00:11:15.994 fused_ordering(691) 00:11:15.994 fused_ordering(692) 00:11:15.994 fused_ordering(693) 00:11:15.994 fused_ordering(694) 00:11:15.994 
fused_ordering(695) 00:11:15.994 fused_ordering(696) 00:11:15.994 fused_ordering(697) 00:11:15.994 fused_ordering(698) 00:11:15.994 fused_ordering(699) 00:11:15.994 fused_ordering(700) 00:11:15.994 fused_ordering(701) 00:11:15.994 fused_ordering(702) 00:11:15.994 fused_ordering(703) 00:11:15.994 fused_ordering(704) 00:11:15.994 fused_ordering(705) 00:11:15.994 fused_ordering(706) 00:11:15.994 fused_ordering(707) 00:11:15.994 fused_ordering(708) 00:11:15.994 fused_ordering(709) 00:11:15.994 fused_ordering(710) 00:11:15.994 fused_ordering(711) 00:11:15.994 fused_ordering(712) 00:11:15.994 fused_ordering(713) 00:11:15.994 fused_ordering(714) 00:11:15.994 fused_ordering(715) 00:11:15.994 fused_ordering(716) 00:11:15.994 fused_ordering(717) 00:11:15.994 fused_ordering(718) 00:11:15.994 fused_ordering(719) 00:11:15.994 fused_ordering(720) 00:11:15.994 fused_ordering(721) 00:11:15.994 fused_ordering(722) 00:11:15.994 fused_ordering(723) 00:11:15.994 fused_ordering(724) 00:11:15.994 fused_ordering(725) 00:11:15.994 fused_ordering(726) 00:11:15.994 fused_ordering(727) 00:11:15.994 fused_ordering(728) 00:11:15.994 fused_ordering(729) 00:11:15.994 fused_ordering(730) 00:11:15.994 fused_ordering(731) 00:11:15.994 fused_ordering(732) 00:11:15.994 fused_ordering(733) 00:11:15.994 fused_ordering(734) 00:11:15.994 fused_ordering(735) 00:11:15.994 fused_ordering(736) 00:11:15.994 fused_ordering(737) 00:11:15.994 fused_ordering(738) 00:11:15.994 fused_ordering(739) 00:11:15.994 fused_ordering(740) 00:11:15.994 fused_ordering(741) 00:11:15.994 fused_ordering(742) 00:11:15.994 fused_ordering(743) 00:11:15.994 fused_ordering(744) 00:11:15.994 fused_ordering(745) 00:11:15.994 fused_ordering(746) 00:11:15.994 fused_ordering(747) 00:11:15.994 fused_ordering(748) 00:11:15.994 fused_ordering(749) 00:11:15.994 fused_ordering(750) 00:11:15.994 fused_ordering(751) 00:11:15.994 fused_ordering(752) 00:11:15.994 fused_ordering(753) 00:11:15.994 fused_ordering(754) 00:11:15.994 fused_ordering(755) 
00:11:15.994 fused_ordering(756) 00:11:15.994 fused_ordering(757) 00:11:15.994 fused_ordering(758) 00:11:15.994 fused_ordering(759) 00:11:15.994 fused_ordering(760) 00:11:15.994 fused_ordering(761) 00:11:15.994 fused_ordering(762) 00:11:15.994 fused_ordering(763) 00:11:15.994 fused_ordering(764) 00:11:15.994 fused_ordering(765) 00:11:15.994 fused_ordering(766) 00:11:15.994 fused_ordering(767) 00:11:15.994 fused_ordering(768) 00:11:15.994 fused_ordering(769) 00:11:15.994 fused_ordering(770) 00:11:15.994 fused_ordering(771) 00:11:15.994 fused_ordering(772) 00:11:15.994 fused_ordering(773) 00:11:15.994 fused_ordering(774) 00:11:15.994 fused_ordering(775) 00:11:15.994 fused_ordering(776) 00:11:15.994 fused_ordering(777) 00:11:15.994 fused_ordering(778) 00:11:15.994 fused_ordering(779) 00:11:15.994 fused_ordering(780) 00:11:15.994 fused_ordering(781) 00:11:15.994 fused_ordering(782) 00:11:15.994 fused_ordering(783) 00:11:15.994 fused_ordering(784) 00:11:15.994 fused_ordering(785) 00:11:15.994 fused_ordering(786) 00:11:15.994 fused_ordering(787) 00:11:15.994 fused_ordering(788) 00:11:15.994 fused_ordering(789) 00:11:15.994 fused_ordering(790) 00:11:15.994 fused_ordering(791) 00:11:15.994 fused_ordering(792) 00:11:15.994 fused_ordering(793) 00:11:15.994 fused_ordering(794) 00:11:15.994 fused_ordering(795) 00:11:15.994 fused_ordering(796) 00:11:15.994 fused_ordering(797) 00:11:15.994 fused_ordering(798) 00:11:15.994 fused_ordering(799) 00:11:15.994 fused_ordering(800) 00:11:15.994 fused_ordering(801) 00:11:15.994 fused_ordering(802) 00:11:15.994 fused_ordering(803) 00:11:15.994 fused_ordering(804) 00:11:15.994 fused_ordering(805) 00:11:15.994 fused_ordering(806) 00:11:15.994 fused_ordering(807) 00:11:15.994 fused_ordering(808) 00:11:15.994 fused_ordering(809) 00:11:15.994 fused_ordering(810) 00:11:15.994 fused_ordering(811) 00:11:15.994 fused_ordering(812) 00:11:15.994 fused_ordering(813) 00:11:15.994 fused_ordering(814) 00:11:15.994 fused_ordering(815) 00:11:15.994 
fused_ordering(816) 00:11:15.994 fused_ordering(817) 00:11:15.994 fused_ordering(818) 00:11:15.994 fused_ordering(819) 00:11:15.994 fused_ordering(820) 00:11:16.563 fused_ordering(821) 00:11:16.563 fused_ordering(822) 00:11:16.563 fused_ordering(823) 00:11:16.563 fused_ordering(824) 00:11:16.563 fused_ordering(825) 00:11:16.563 fused_ordering(826) 00:11:16.563 fused_ordering(827) 00:11:16.563 fused_ordering(828) 00:11:16.563 fused_ordering(829) 00:11:16.563 fused_ordering(830) 00:11:16.563 fused_ordering(831) 00:11:16.563 fused_ordering(832) 00:11:16.563 fused_ordering(833) 00:11:16.563 fused_ordering(834) 00:11:16.563 fused_ordering(835) 00:11:16.563 fused_ordering(836) 00:11:16.563 fused_ordering(837) 00:11:16.563 fused_ordering(838) 00:11:16.563 fused_ordering(839) 00:11:16.563 fused_ordering(840) 00:11:16.563 fused_ordering(841) 00:11:16.563 fused_ordering(842) 00:11:16.563 fused_ordering(843) 00:11:16.563 fused_ordering(844) 00:11:16.563 fused_ordering(845) 00:11:16.563 fused_ordering(846) 00:11:16.563 fused_ordering(847) 00:11:16.563 fused_ordering(848) 00:11:16.563 fused_ordering(849) 00:11:16.563 fused_ordering(850) 00:11:16.563 fused_ordering(851) 00:11:16.563 fused_ordering(852) 00:11:16.563 fused_ordering(853) 00:11:16.563 fused_ordering(854) 00:11:16.563 fused_ordering(855) 00:11:16.563 fused_ordering(856) 00:11:16.563 fused_ordering(857) 00:11:16.563 fused_ordering(858) 00:11:16.563 fused_ordering(859) 00:11:16.563 fused_ordering(860) 00:11:16.563 fused_ordering(861) 00:11:16.563 fused_ordering(862) 00:11:16.563 fused_ordering(863) 00:11:16.563 fused_ordering(864) 00:11:16.563 fused_ordering(865) 00:11:16.563 fused_ordering(866) 00:11:16.563 fused_ordering(867) 00:11:16.563 fused_ordering(868) 00:11:16.563 fused_ordering(869) 00:11:16.563 fused_ordering(870) 00:11:16.563 fused_ordering(871) 00:11:16.563 fused_ordering(872) 00:11:16.563 fused_ordering(873) 00:11:16.563 fused_ordering(874) 00:11:16.563 fused_ordering(875) 00:11:16.563 fused_ordering(876) 
00:11:16.563 fused_ordering(877) 00:11:16.563 fused_ordering(878) 00:11:16.563 fused_ordering(879) 00:11:16.563 fused_ordering(880) 00:11:16.563 fused_ordering(881) 00:11:16.563 fused_ordering(882) 00:11:16.563 fused_ordering(883) 00:11:16.563 fused_ordering(884) 00:11:16.563 fused_ordering(885) 00:11:16.563 fused_ordering(886) 00:11:16.563 fused_ordering(887) 00:11:16.563 fused_ordering(888) 00:11:16.563 fused_ordering(889) 00:11:16.563 fused_ordering(890) 00:11:16.563 fused_ordering(891) 00:11:16.563 fused_ordering(892) 00:11:16.563 fused_ordering(893) 00:11:16.563 fused_ordering(894) 00:11:16.563 fused_ordering(895) 00:11:16.563 fused_ordering(896) 00:11:16.563 fused_ordering(897) 00:11:16.563 fused_ordering(898) 00:11:16.563 fused_ordering(899) 00:11:16.563 fused_ordering(900) 00:11:16.563 fused_ordering(901) 00:11:16.563 fused_ordering(902) 00:11:16.563 fused_ordering(903) 00:11:16.563 fused_ordering(904) 00:11:16.563 fused_ordering(905) 00:11:16.563 fused_ordering(906) 00:11:16.563 fused_ordering(907) 00:11:16.563 fused_ordering(908) 00:11:16.563 fused_ordering(909) 00:11:16.563 fused_ordering(910) 00:11:16.563 fused_ordering(911) 00:11:16.563 fused_ordering(912) 00:11:16.563 fused_ordering(913) 00:11:16.563 fused_ordering(914) 00:11:16.563 fused_ordering(915) 00:11:16.563 fused_ordering(916) 00:11:16.563 fused_ordering(917) 00:11:16.563 fused_ordering(918) 00:11:16.563 fused_ordering(919) 00:11:16.563 fused_ordering(920) 00:11:16.563 fused_ordering(921) 00:11:16.563 fused_ordering(922) 00:11:16.563 fused_ordering(923) 00:11:16.563 fused_ordering(924) 00:11:16.563 fused_ordering(925) 00:11:16.563 fused_ordering(926) 00:11:16.563 fused_ordering(927) 00:11:16.563 fused_ordering(928) 00:11:16.563 fused_ordering(929) 00:11:16.563 fused_ordering(930) 00:11:16.563 fused_ordering(931) 00:11:16.563 fused_ordering(932) 00:11:16.563 fused_ordering(933) 00:11:16.563 fused_ordering(934) 00:11:16.563 fused_ordering(935) 00:11:16.563 fused_ordering(936) 00:11:16.563 
fused_ordering(937) 00:11:16.563 fused_ordering(938) 00:11:16.563 fused_ordering(939) 00:11:16.563 fused_ordering(940) 00:11:16.563 fused_ordering(941) 00:11:16.563 fused_ordering(942) 00:11:16.563 fused_ordering(943) 00:11:16.563 fused_ordering(944) 00:11:16.563 fused_ordering(945) 00:11:16.563 fused_ordering(946) 00:11:16.563 fused_ordering(947) 00:11:16.563 fused_ordering(948) 00:11:16.563 fused_ordering(949) 00:11:16.563 fused_ordering(950) 00:11:16.563 fused_ordering(951) 00:11:16.563 fused_ordering(952) 00:11:16.563 fused_ordering(953) 00:11:16.563 fused_ordering(954) 00:11:16.563 fused_ordering(955) 00:11:16.563 fused_ordering(956) 00:11:16.563 fused_ordering(957) 00:11:16.563 fused_ordering(958) 00:11:16.563 fused_ordering(959) 00:11:16.563 fused_ordering(960) 00:11:16.563 fused_ordering(961) 00:11:16.563 fused_ordering(962) 00:11:16.563 fused_ordering(963) 00:11:16.563 fused_ordering(964) 00:11:16.563 fused_ordering(965) 00:11:16.563 fused_ordering(966) 00:11:16.563 fused_ordering(967) 00:11:16.563 fused_ordering(968) 00:11:16.563 fused_ordering(969) 00:11:16.563 fused_ordering(970) 00:11:16.563 fused_ordering(971) 00:11:16.563 fused_ordering(972) 00:11:16.563 fused_ordering(973) 00:11:16.563 fused_ordering(974) 00:11:16.563 fused_ordering(975) 00:11:16.563 fused_ordering(976) 00:11:16.563 fused_ordering(977) 00:11:16.563 fused_ordering(978) 00:11:16.563 fused_ordering(979) 00:11:16.563 fused_ordering(980) 00:11:16.563 fused_ordering(981) 00:11:16.563 fused_ordering(982) 00:11:16.563 fused_ordering(983) 00:11:16.563 fused_ordering(984) 00:11:16.563 fused_ordering(985) 00:11:16.563 fused_ordering(986) 00:11:16.563 fused_ordering(987) 00:11:16.563 fused_ordering(988) 00:11:16.563 fused_ordering(989) 00:11:16.563 fused_ordering(990) 00:11:16.563 fused_ordering(991) 00:11:16.563 fused_ordering(992) 00:11:16.563 fused_ordering(993) 00:11:16.563 fused_ordering(994) 00:11:16.563 fused_ordering(995) 00:11:16.563 fused_ordering(996) 00:11:16.563 fused_ordering(997) 
00:11:16.563 fused_ordering(998) 00:11:16.563 fused_ordering(999) 00:11:16.563 fused_ordering(1000) 00:11:16.563 fused_ordering(1001) 00:11:16.563 fused_ordering(1002) 00:11:16.563 fused_ordering(1003) 00:11:16.563 fused_ordering(1004) 00:11:16.563 fused_ordering(1005) 00:11:16.563 fused_ordering(1006) 00:11:16.563 fused_ordering(1007) 00:11:16.563 fused_ordering(1008) 00:11:16.563 fused_ordering(1009) 00:11:16.563 fused_ordering(1010) 00:11:16.563 fused_ordering(1011) 00:11:16.563 fused_ordering(1012) 00:11:16.563 fused_ordering(1013) 00:11:16.563 fused_ordering(1014) 00:11:16.563 fused_ordering(1015) 00:11:16.563 fused_ordering(1016) 00:11:16.563 fused_ordering(1017) 00:11:16.564 fused_ordering(1018) 00:11:16.564 fused_ordering(1019) 00:11:16.564 fused_ordering(1020) 00:11:16.564 fused_ordering(1021) 00:11:16.564 fused_ordering(1022) 00:11:16.564 fused_ordering(1023) 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.564 rmmod nvme_tcp 00:11:16.564 rmmod nvme_fabrics 00:11:16.564 rmmod nvme_keyring 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3639515 ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3639515 ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3639515' 00:11:16.564 killing process with pid 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3639515 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.564 19:17:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:19.102 00:11:19.102 real 0m10.279s 00:11:19.102 user 0m5.441s 00:11:19.102 sys 0m5.100s 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:19.102 ************************************ 00:11:19.102 END TEST nvmf_fused_ordering 00:11:19.102 ************************************ 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:19.102 19:17:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:19.102 ************************************ 00:11:19.102 START TEST nvmf_ns_masking 00:11:19.102 ************************************ 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:19.102 * Looking for test storage... 00:11:19.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:19.102 19:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:19.102 --rc genhtml_branch_coverage=1
00:11:19.102 --rc genhtml_function_coverage=1
00:11:19.102 --rc genhtml_legend=1
00:11:19.102 --rc geninfo_all_blocks=1
00:11:19.102 --rc geninfo_unexecuted_blocks=1
00:11:19.102 
00:11:19.102 '
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:19.102 --rc genhtml_branch_coverage=1
00:11:19.102 --rc genhtml_function_coverage=1
00:11:19.102 --rc genhtml_legend=1
00:11:19.102 --rc geninfo_all_blocks=1
00:11:19.102 --rc geninfo_unexecuted_blocks=1
00:11:19.102 
00:11:19.102 '
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:11:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:19.102 --rc genhtml_branch_coverage=1
00:11:19.102 --rc genhtml_function_coverage=1
00:11:19.102 --rc genhtml_legend=1
00:11:19.102 --rc geninfo_all_blocks=1
00:11:19.102 --rc geninfo_unexecuted_blocks=1
00:11:19.102 
00:11:19.102 '
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:11:19.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:19.102 --rc genhtml_branch_coverage=1
00:11:19.102 --rc genhtml_function_coverage=1
00:11:19.102 --rc genhtml_legend=1
00:11:19.102 --rc geninfo_all_blocks=1
00:11:19.102 --rc geninfo_unexecuted_blocks=1
00:11:19.102 
00:11:19.102 '
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:19.102 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:19.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f7de303e-0e0d-478f-8ff3-ca0c19a33579
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c8b6710e-d281-4c44-b446-670f4bcb3369
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=bba02e31-a04d-4a13-91c8-bb00981c73f8
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:11:19.103 19:17:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:11:24.378 Found 0000:31:00.0 (0x8086 - 0x159b)
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:11:24.378 Found 0000:31:00.1 (0x8086 - 0x159b)
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:11:24.378 Found net devices under 0000:31:00.0: cvl_0_0
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:11:24.378 Found net devices under 0000:31:00.1: cvl_0_1
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:24.378 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:24.379 19:17:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:24.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:24.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.595 ms
00:11:24.379 
00:11:24.379 --- 10.0.0.2 ping statistics ---
00:11:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.379 rtt min/avg/max/mdev = 0.595/0.595/0.595/0.000 ms
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:24.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:24.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms
00:11:24.379 
00:11:24.379 --- 10.0.0.1 ping statistics ---
00:11:24.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:24.379 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:24.379 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3644539
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3644539
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3644539 ']'
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:24.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:24.638 19:17:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:24.639 [2024-11-26 19:17:58.303242] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization...
00:11:24.639 [2024-11-26 19:17:58.303292] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:24.639 [2024-11-26 19:17:58.390187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.639 [2024-11-26 19:17:58.435561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:24.639 [2024-11-26 19:17:58.435609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:24.639 [2024-11-26 19:17:58.435618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:24.639 [2024-11-26 19:17:58.435626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:24.639 [2024-11-26 19:17:58.435632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:24.639 [2024-11-26 19:17:58.436408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:25.576 [2024-11-26 19:17:59.288682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:11:25.576 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:25.835 Malloc1
00:11:25.835 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:25.835 Malloc2
00:11:25.835 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:26.094 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:11:26.354 19:17:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:26.354 [2024-11-26 19:18:00.146230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:26.354 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:11:26.354 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bba02e31-a04d-4a13-91c8-bb00981c73f8 -a 10.0.0.2 -s 4420 -i 4
00:11:26.614 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:11:26.614 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:11:26.614 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:11:26.614 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:11:26.614 19:18:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:11:28.518 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:11:28.518 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:11:28.518 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
[ 0]:0x1
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:28.776 
19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fb82be4d9e1452990faf202aabd0f46 00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fb82be4d9e1452990faf202aabd0f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:28.776 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:29.035 [ 0]:0x1 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fb82be4d9e1452990faf202aabd0f46 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fb82be4d9e1452990faf202aabd0f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:29.035 [ 1]:0x2 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.035 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.295 19:18:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:29.295 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:29.295 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bba02e31-a04d-4a13-91c8-bb00981c73f8 -a 10.0.0.2 -s 4420 -i 4 00:11:29.554 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:29.554 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.554 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.554 19:18:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:11:29.554 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:11:29.554 19:18:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:32.088 [ 0]:0x2 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:32.088 [ 0]:0x1 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:32.088 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fb82be4d9e1452990faf202aabd0f46 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fb82be4d9e1452990faf202aabd0f46 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:32.089 [ 1]:0x2 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:32.089 [ 0]:0x2 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:32.089 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.348 19:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:32.348 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:32.348 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I bba02e31-a04d-4a13-91c8-bb00981c73f8 -a 10.0.0.2 -s 4420 -i 4 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:32.608 19:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.696 [ 0]:0x1 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:34.696 19:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6fb82be4d9e1452990faf202aabd0f46 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6fb82be4d9e1452990faf202aabd0f46 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.696 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:34.956 [ 1]:0x2 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:34.956 
19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.956 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.216 [ 0]:0x2 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.216 19:18:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:35.216 19:18:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:35.216 [2024-11-26 19:18:09.026386] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:35.216 request: 00:11:35.216 { 00:11:35.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.216 "nsid": 2, 00:11:35.216 "host": "nqn.2016-06.io.spdk:host1", 00:11:35.216 "method": "nvmf_ns_remove_host", 00:11:35.216 "req_id": 1 00:11:35.216 } 00:11:35.216 Got JSON-RPC error response 00:11:35.216 response: 00:11:35.216 { 00:11:35.216 "code": -32602, 00:11:35.216 "message": "Invalid parameters" 00:11:35.216 } 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.216 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:35.475 19:18:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.475 [ 0]:0x2 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf5e7809f618427e8c097aa25bb0e0bd 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf5e7809f618427e8c097aa25bb0e0bd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3647148 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3647148 
/var/tmp/host.sock 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3647148 ']' 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:35.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:35.475 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:35.475 [2024-11-26 19:18:09.190356] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:11:35.475 [2024-11-26 19:18:09.190405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647148 ] 00:11:35.475 [2024-11-26 19:18:09.268855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.475 [2024-11-26 19:18:09.305132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.413 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.413 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:36.413 19:18:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.413 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f7de303e-0e0d-478f-8ff3-ca0c19a33579 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F7DE303E0E0D478F8FF3CA0C19A33579 -i 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c8b6710e-d281-4c44-b446-670f4bcb3369 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:36.672 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C8B6710ED2814C44B446670F4BCB3369 -i 00:11:36.931 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:36.931 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:37.191 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:37.191 19:18:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:37.450 nvme0n1 00:11:37.450 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:37.450 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:38.018 nvme1n2 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:38.018 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:38.276 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f7de303e-0e0d-478f-8ff3-ca0c19a33579 == \f\7\d\e\3\0\3\e\-\0\e\0\d\-\4\7\8\f\-\8\f\f\3\-\c\a\0\c\1\9\a\3\3\5\7\9 ]] 00:11:38.276 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:38.276 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:38.276 19:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:38.276 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c8b6710e-d281-4c44-b446-670f4bcb3369 == \c\8\b\6\7\1\0\e\-\d\2\8\1\-\4\c\4\4\-\b\4\4\6\-\6\7\0\f\4\b\c\b\3\3\6\9 ]] 00:11:38.276 19:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.534 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f7de303e-0e0d-478f-8ff3-ca0c19a33579 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F7DE303E0E0D478F8FF3CA0C19A33579 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F7DE303E0E0D478F8FF3CA0C19A33579 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.792 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F7DE303E0E0D478F8FF3CA0C19A33579 00:11:38.793 [2024-11-26 19:18:12.551577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:11:38.793 [2024-11-26 19:18:12.551605] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:11:38.793 [2024-11-26 19:18:12.551613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:38.793 request: 00:11:38.793 { 00:11:38.793 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:38.793 "namespace": { 00:11:38.793 "bdev_name": "invalid", 00:11:38.793 "nsid": 1, 00:11:38.793 "nguid": "F7DE303E0E0D478F8FF3CA0C19A33579", 00:11:38.793 "no_auto_visible": false, 00:11:38.793 "hide_metadata": false 00:11:38.793 }, 00:11:38.793 "method": "nvmf_subsystem_add_ns", 00:11:38.793 "req_id": 1 00:11:38.793 } 00:11:38.793 Got JSON-RPC error response 00:11:38.793 response: 00:11:38.793 { 00:11:38.793 "code": -32602, 00:11:38.793 "message": "Invalid parameters" 00:11:38.793 } 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:11:38.793 19:18:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f7de303e-0e0d-478f-8ff3-ca0c19a33579 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:11:38.793 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F7DE303E0E0D478F8FF3CA0C19A33579 -i 00:11:39.052 19:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:11:40.954 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:11:40.954 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:11:40.954 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3647148 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3647148 ']' 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3647148 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:41.213 19:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3647148 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3647148' 00:11:41.213 killing process with pid 3647148 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3647148 00:11:41.213 19:18:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3647148 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.473 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:11:41.473 rmmod nvme_tcp 00:11:41.473 rmmod nvme_fabrics 00:11:41.732 rmmod nvme_keyring 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3644539 ']' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3644539 ']' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3644539' 00:11:41.733 killing process with pid 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3644539 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.733 19:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.266 00:11:44.266 real 0m25.139s 00:11:44.266 user 0m29.088s 00:11:44.266 sys 0m6.379s 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:44.266 ************************************ 00:11:44.266 END TEST nvmf_ns_masking 00:11:44.266 ************************************ 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.266 ************************************ 00:11:44.266 START TEST nvmf_nvme_cli 00:11:44.266 ************************************ 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:44.266 * Looking for test storage... 00:11:44.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:44.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.266 --rc genhtml_branch_coverage=1 00:11:44.266 --rc genhtml_function_coverage=1 00:11:44.266 --rc genhtml_legend=1 00:11:44.266 --rc geninfo_all_blocks=1 00:11:44.266 --rc geninfo_unexecuted_blocks=1 00:11:44.266 
00:11:44.266 ' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:44.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.266 --rc genhtml_branch_coverage=1 00:11:44.266 --rc genhtml_function_coverage=1 00:11:44.266 --rc genhtml_legend=1 00:11:44.266 --rc geninfo_all_blocks=1 00:11:44.266 --rc geninfo_unexecuted_blocks=1 00:11:44.266 00:11:44.266 ' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:44.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.266 --rc genhtml_branch_coverage=1 00:11:44.266 --rc genhtml_function_coverage=1 00:11:44.266 --rc genhtml_legend=1 00:11:44.266 --rc geninfo_all_blocks=1 00:11:44.266 --rc geninfo_unexecuted_blocks=1 00:11:44.266 00:11:44.266 ' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:44.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.266 --rc genhtml_branch_coverage=1 00:11:44.266 --rc genhtml_function_coverage=1 00:11:44.266 --rc genhtml_legend=1 00:11:44.266 --rc geninfo_all_blocks=1 00:11:44.266 --rc geninfo_unexecuted_blocks=1 00:11:44.266 00:11:44.266 ' 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.266 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.267 19:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.267 19:18:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:11:49.535 19:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:49.535 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:49.535 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.535 19:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:49.535 Found net devices under 0000:31:00.0: cvl_0_0 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:49.535 Found net devices under 0000:31:00.1: cvl_0_1 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.535 19:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.535 19:18:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.535 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.535 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:49.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:11:49.536 00:11:49.536 --- 10.0.0.2 ping statistics --- 00:11:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.536 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:11:49.536 00:11:49.536 --- 10.0.0.1 ping statistics --- 00:11:49.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.536 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.536 19:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3653318 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3653318 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3653318 ']' 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:49.536 19:18:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:49.536 [2024-11-26 19:18:23.219416] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:11:49.536 [2024-11-26 19:18:23.219467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.536 [2024-11-26 19:18:23.305660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:49.536 [2024-11-26 19:18:23.353499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:49.536 [2024-11-26 19:18:23.353544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:49.536 [2024-11-26 19:18:23.353553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:49.536 [2024-11-26 19:18:23.353560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:49.536 [2024-11-26 19:18:23.353566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:49.536 [2024-11-26 19:18:23.355793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:49.536 [2024-11-26 19:18:23.355954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:49.536 [2024-11-26 19:18:23.356133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.536 [2024-11-26 19:18:23.356134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 [2024-11-26 19:18:24.060940] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 Malloc0 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 Malloc1 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.473 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.474 [2024-11-26 19:18:24.150827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -a 10.0.0.2 -s 4420 00:11:50.474 00:11:50.474 Discovery Log Number of Records 2, Generation counter 2 00:11:50.474 =====Discovery Log Entry 0====== 00:11:50.474 trtype: tcp 00:11:50.474 adrfam: ipv4 00:11:50.474 subtype: current discovery subsystem 00:11:50.474 treq: not required 00:11:50.474 portid: 0 00:11:50.474 trsvcid: 4420 
00:11:50.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:50.474 traddr: 10.0.0.2 00:11:50.474 eflags: explicit discovery connections, duplicate discovery information 00:11:50.474 sectype: none 00:11:50.474 =====Discovery Log Entry 1====== 00:11:50.474 trtype: tcp 00:11:50.474 adrfam: ipv4 00:11:50.474 subtype: nvme subsystem 00:11:50.474 treq: not required 00:11:50.474 portid: 0 00:11:50.474 trsvcid: 4420 00:11:50.474 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:50.474 traddr: 10.0.0.2 00:11:50.474 eflags: none 00:11:50.474 sectype: none 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:50.474 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:50.733 19:18:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:52.110 19:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:52.110 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:11:52.110 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:52.110 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:11:52.110 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:11:52.110 19:18:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:54.647 
19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:11:54.647 /dev/nvme0n2 ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:54.647 19:18:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:54.647 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.648 rmmod nvme_tcp 00:11:54.648 rmmod nvme_fabrics 00:11:54.648 rmmod nvme_keyring 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3653318 ']' 
00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3653318 ']' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3653318' 00:11:54.648 killing process with pid 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3653318 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.648 19:18:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.554 00:11:56.554 real 0m12.708s 00:11:56.554 user 0m21.134s 00:11:56.554 sys 0m4.698s 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.554 ************************************ 00:11:56.554 END TEST nvmf_nvme_cli 00:11:56.554 ************************************ 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.554 ************************************ 00:11:56.554 
START TEST nvmf_vfio_user 00:11:56.554 ************************************ 00:11:56.554 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:11:56.814 * Looking for test storage... 00:11:56.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.814 19:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:11:56.814 19:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:56.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.814 --rc genhtml_branch_coverage=1 00:11:56.814 --rc genhtml_function_coverage=1 00:11:56.814 --rc genhtml_legend=1 00:11:56.814 --rc geninfo_all_blocks=1 00:11:56.814 --rc geninfo_unexecuted_blocks=1 00:11:56.814 00:11:56.814 ' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:56.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.814 --rc genhtml_branch_coverage=1 00:11:56.814 --rc genhtml_function_coverage=1 00:11:56.814 --rc genhtml_legend=1 00:11:56.814 --rc geninfo_all_blocks=1 00:11:56.814 --rc geninfo_unexecuted_blocks=1 00:11:56.814 00:11:56.814 ' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:56.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.814 --rc genhtml_branch_coverage=1 00:11:56.814 --rc genhtml_function_coverage=1 00:11:56.814 --rc genhtml_legend=1 00:11:56.814 --rc geninfo_all_blocks=1 00:11:56.814 --rc geninfo_unexecuted_blocks=1 00:11:56.814 00:11:56.814 ' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:56.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.814 --rc genhtml_branch_coverage=1 00:11:56.814 --rc genhtml_function_coverage=1 00:11:56.814 --rc genhtml_legend=1 00:11:56.814 --rc geninfo_all_blocks=1 00:11:56.814 --rc geninfo_unexecuted_blocks=1 00:11:56.814 00:11:56.814 ' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.814 
19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.814 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:11:56.815 19:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3655126 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3655126' 00:11:56.815 Process pid: 3655126 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3655126 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3655126 ']' 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.815 19:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:11:56.815 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:11:56.815 [2024-11-26 19:18:30.563650] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:11:56.815 [2024-11-26 19:18:30.563703] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.815 [2024-11-26 19:18:30.628661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.815 [2024-11-26 19:18:30.658770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.815 [2024-11-26 19:18:30.658797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.815 [2024-11-26 19:18:30.658803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.815 [2024-11-26 19:18:30.658808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.815 [2024-11-26 19:18:30.658812] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.815 [2024-11-26 19:18:30.660073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.815 [2024-11-26 19:18:30.660225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.815 [2024-11-26 19:18:30.660456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.815 [2024-11-26 19:18:30.660455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.074 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.074 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:11:57.074 19:18:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:11:58.012 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:11:58.270 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:11:58.270 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:11:58.270 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.270 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:11:58.270 19:18:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:58.270 Malloc1 00:11:58.270 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:11:58.529 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:11:58.788 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:11:58.788 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:11:58.788 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:11:58.788 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:59.047 Malloc2 00:11:59.047 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:11:59.047 19:18:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:11:59.307 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:11:59.568 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:11:59.568 [2024-11-26 19:18:33.228961] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:11:59.568 [2024-11-26 19:18:33.228991] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3655793 ] 00:11:59.568 [2024-11-26 19:18:33.266417] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:11:59.568 [2024-11-26 19:18:33.271669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.568 [2024-11-26 19:18:33.271687] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7cf990f000 00:11:59.568 [2024-11-26 19:18:33.272665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.273665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.274669] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.275677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.276689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.277683] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.278689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.279698] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:11:59.568 [2024-11-26 19:18:33.280702] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:11:59.568 [2024-11-26 19:18:33.280708] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7cf9904000 00:11:59.569 [2024-11-26 19:18:33.281622] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.569 [2024-11-26 19:18:33.291068] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:11:59.569 [2024-11-26 19:18:33.291088] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:11:59.569 [2024-11-26 19:18:33.295795] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:11:59.569 [2024-11-26 19:18:33.295829] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:11:59.569 [2024-11-26 19:18:33.295894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:11:59.569 [2024-11-26 19:18:33.295910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:11:59.569 [2024-11-26 19:18:33.295914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:11:59.569 [2024-11-26 19:18:33.296788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:11:59.569 [2024-11-26 19:18:33.296797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:11:59.569 [2024-11-26 19:18:33.296802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:11:59.569 [2024-11-26 19:18:33.297795] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:11:59.569 [2024-11-26 19:18:33.297801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:11:59.569 [2024-11-26 19:18:33.297807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.302104] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:11:59.569 [2024-11-26 19:18:33.302111] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.302825] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:11:59.569 [2024-11-26 19:18:33.302831] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:11:59.569 [2024-11-26 19:18:33.302834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.302839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.302946] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:11:59.569 [2024-11-26 19:18:33.302949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.302953] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:11:59.569 [2024-11-26 19:18:33.303830] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:11:59.569 [2024-11-26 19:18:33.304833] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:11:59.569 [2024-11-26 19:18:33.305841] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:11:59.569 [2024-11-26 19:18:33.306837] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:11:59.569 [2024-11-26 19:18:33.306885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:11:59.569 [2024-11-26 19:18:33.307851] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:11:59.569 [2024-11-26 19:18:33.307857] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:11:59.569 [2024-11-26 19:18:33.307860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.307875] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:11:59.569 [2024-11-26 19:18:33.307881] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.307897] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.569 [2024-11-26 19:18:33.307901] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.569 [2024-11-26 19:18:33.307904] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.569 [2024-11-26 19:18:33.307915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.569 [2024-11-26 19:18:33.307956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:11:59.569 [2024-11-26 19:18:33.307964] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:11:59.569 [2024-11-26 19:18:33.307968] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:11:59.569 [2024-11-26 19:18:33.307971] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:11:59.569 [2024-11-26 19:18:33.307975] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:11:59.569 [2024-11-26 19:18:33.307978] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:11:59.569 [2024-11-26 19:18:33.307981] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:11:59.569 [2024-11-26 19:18:33.307985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.307991] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.307999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:11:59.569 [2024-11-26 19:18:33.308008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:11:59.569 [2024-11-26 19:18:33.308017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.569 [2024-11-26 
19:18:33.308023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.569 [2024-11-26 19:18:33.308029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.569 [2024-11-26 19:18:33.308037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:59.569 [2024-11-26 19:18:33.308040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308053] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:11:59.569 [2024-11-26 19:18:33.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:11:59.569 [2024-11-26 19:18:33.308067] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:11:59.569 [2024-11-26 19:18:33.308071] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.569 [2024-11-26 19:18:33.308103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:11:59.569 [2024-11-26 19:18:33.308147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308159] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:11:59.569 [2024-11-26 19:18:33.308162] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:11:59.569 [2024-11-26 19:18:33.308164] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.569 [2024-11-26 19:18:33.308169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:11:59.569 [2024-11-26 19:18:33.308182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:11:59.569 [2024-11-26 19:18:33.308192] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:11:59.569 [2024-11-26 19:18:33.308198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:11:59.569 [2024-11-26 19:18:33.308209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.569 [2024-11-26 19:18:33.308212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.569 [2024-11-26 19:18:33.308214] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.570 [2024-11-26 19:18:33.308219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308257] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:11:59.570 [2024-11-26 19:18:33.308260] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.570 [2024-11-26 19:18:33.308262] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.570 [2024-11-26 19:18:33.308266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308314] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:11:59.570 [2024-11-26 19:18:33.308317] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:11:59.570 [2024-11-26 19:18:33.308321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:11:59.570 [2024-11-26 19:18:33.308335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308344] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308365] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308399] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:11:59.570 [2024-11-26 19:18:33.308403] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:11:59.570 [2024-11-26 19:18:33.308406] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:11:59.570 [2024-11-26 19:18:33.308408] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:11:59.570 [2024-11-26 19:18:33.308411] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:11:59.570 [2024-11-26 19:18:33.308415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:11:59.570 [2024-11-26 19:18:33.308421] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:11:59.570 [2024-11-26 19:18:33.308424] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:11:59.570 [2024-11-26 19:18:33.308426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.570 [2024-11-26 19:18:33.308431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308436] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:11:59.570 [2024-11-26 19:18:33.308439] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:11:59.570 [2024-11-26 19:18:33.308441] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.570 [2024-11-26 19:18:33.308446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308451] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:11:59.570 [2024-11-26 19:18:33.308454] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:11:59.570 [2024-11-26 19:18:33.308457] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:11:59.570 [2024-11-26 19:18:33.308461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:11:59.570 [2024-11-26 19:18:33.308466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:11:59.570 [2024-11-26 19:18:33.308489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:11:59.570 ===================================================== 00:11:59.570 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:11:59.570 ===================================================== 00:11:59.570 Controller Capabilities/Features 00:11:59.570 ================================ 00:11:59.570 Vendor ID: 4e58 00:11:59.570 Subsystem Vendor ID: 4e58 00:11:59.570 Serial Number: SPDK1 00:11:59.570 Model Number: SPDK bdev Controller 00:11:59.570 Firmware Version: 25.01 00:11:59.570 Recommended Arb Burst: 6 00:11:59.570 IEEE OUI Identifier: 8d 6b 50 00:11:59.570 Multi-path I/O 00:11:59.570 May have multiple subsystem ports: Yes 00:11:59.570 May have multiple controllers: Yes 00:11:59.570 Associated with SR-IOV VF: No 00:11:59.570 Max Data Transfer Size: 131072 00:11:59.570 Max Number of Namespaces: 32 00:11:59.570 Max Number of I/O Queues: 127 00:11:59.570 NVMe Specification Version (VS): 1.3 00:11:59.570 NVMe Specification Version (Identify): 1.3 00:11:59.570 Maximum Queue Entries: 256 00:11:59.570 Contiguous Queues Required: Yes 00:11:59.570 Arbitration Mechanisms Supported 00:11:59.570 Weighted Round Robin: Not Supported 00:11:59.570 Vendor Specific: Not Supported 00:11:59.570 Reset Timeout: 15000 ms 00:11:59.570 Doorbell Stride: 4 bytes 00:11:59.570 NVM Subsystem Reset: Not Supported 00:11:59.570 Command Sets Supported 00:11:59.570 NVM Command Set: Supported 00:11:59.570 Boot Partition: Not Supported 00:11:59.570 Memory 
Page Size Minimum: 4096 bytes 00:11:59.570 Memory Page Size Maximum: 4096 bytes 00:11:59.570 Persistent Memory Region: Not Supported 00:11:59.570 Optional Asynchronous Events Supported 00:11:59.570 Namespace Attribute Notices: Supported 00:11:59.570 Firmware Activation Notices: Not Supported 00:11:59.570 ANA Change Notices: Not Supported 00:11:59.570 PLE Aggregate Log Change Notices: Not Supported 00:11:59.570 LBA Status Info Alert Notices: Not Supported 00:11:59.570 EGE Aggregate Log Change Notices: Not Supported 00:11:59.570 Normal NVM Subsystem Shutdown event: Not Supported 00:11:59.570 Zone Descriptor Change Notices: Not Supported 00:11:59.570 Discovery Log Change Notices: Not Supported 00:11:59.570 Controller Attributes 00:11:59.570 128-bit Host Identifier: Supported 00:11:59.570 Non-Operational Permissive Mode: Not Supported 00:11:59.570 NVM Sets: Not Supported 00:11:59.570 Read Recovery Levels: Not Supported 00:11:59.570 Endurance Groups: Not Supported 00:11:59.570 Predictable Latency Mode: Not Supported 00:11:59.570 Traffic Based Keep ALive: Not Supported 00:11:59.570 Namespace Granularity: Not Supported 00:11:59.570 SQ Associations: Not Supported 00:11:59.570 UUID List: Not Supported 00:11:59.570 Multi-Domain Subsystem: Not Supported 00:11:59.570 Fixed Capacity Management: Not Supported 00:11:59.570 Variable Capacity Management: Not Supported 00:11:59.570 Delete Endurance Group: Not Supported 00:11:59.570 Delete NVM Set: Not Supported 00:11:59.570 Extended LBA Formats Supported: Not Supported 00:11:59.570 Flexible Data Placement Supported: Not Supported 00:11:59.570 00:11:59.570 Controller Memory Buffer Support 00:11:59.570 ================================ 00:11:59.570 Supported: No 00:11:59.570 00:11:59.570 Persistent Memory Region Support 00:11:59.570 ================================ 00:11:59.570 Supported: No 00:11:59.570 00:11:59.570 Admin Command Set Attributes 00:11:59.570 ============================ 00:11:59.570 Security Send/Receive: Not Supported 
00:11:59.570 Format NVM: Not Supported 00:11:59.570 Firmware Activate/Download: Not Supported 00:11:59.570 Namespace Management: Not Supported 00:11:59.570 Device Self-Test: Not Supported 00:11:59.570 Directives: Not Supported 00:11:59.570 NVMe-MI: Not Supported 00:11:59.570 Virtualization Management: Not Supported 00:11:59.570 Doorbell Buffer Config: Not Supported 00:11:59.571 Get LBA Status Capability: Not Supported 00:11:59.571 Command & Feature Lockdown Capability: Not Supported 00:11:59.571 Abort Command Limit: 4 00:11:59.571 Async Event Request Limit: 4 00:11:59.571 Number of Firmware Slots: N/A 00:11:59.571 Firmware Slot 1 Read-Only: N/A 00:11:59.571 Firmware Activation Without Reset: N/A 00:11:59.571 Multiple Update Detection Support: N/A 00:11:59.571 Firmware Update Granularity: No Information Provided 00:11:59.571 Per-Namespace SMART Log: No 00:11:59.571 Asymmetric Namespace Access Log Page: Not Supported 00:11:59.571 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:11:59.571 Command Effects Log Page: Supported 00:11:59.571 Get Log Page Extended Data: Supported 00:11:59.571 Telemetry Log Pages: Not Supported 00:11:59.571 Persistent Event Log Pages: Not Supported 00:11:59.571 Supported Log Pages Log Page: May Support 00:11:59.571 Commands Supported & Effects Log Page: Not Supported 00:11:59.571 Feature Identifiers & Effects Log Page:May Support 00:11:59.571 NVMe-MI Commands & Effects Log Page: May Support 00:11:59.571 Data Area 4 for Telemetry Log: Not Supported 00:11:59.571 Error Log Page Entries Supported: 128 00:11:59.571 Keep Alive: Supported 00:11:59.571 Keep Alive Granularity: 10000 ms 00:11:59.571 00:11:59.571 NVM Command Set Attributes 00:11:59.571 ========================== 00:11:59.571 Submission Queue Entry Size 00:11:59.571 Max: 64 00:11:59.571 Min: 64 00:11:59.571 Completion Queue Entry Size 00:11:59.571 Max: 16 00:11:59.571 Min: 16 00:11:59.571 Number of Namespaces: 32 00:11:59.571 Compare Command: Supported 00:11:59.571 Write Uncorrectable 
Command: Not Supported 00:11:59.571 Dataset Management Command: Supported 00:11:59.571 Write Zeroes Command: Supported 00:11:59.571 Set Features Save Field: Not Supported 00:11:59.571 Reservations: Not Supported 00:11:59.571 Timestamp: Not Supported 00:11:59.571 Copy: Supported 00:11:59.571 Volatile Write Cache: Present 00:11:59.571 Atomic Write Unit (Normal): 1 00:11:59.571 Atomic Write Unit (PFail): 1 00:11:59.571 Atomic Compare & Write Unit: 1 00:11:59.571 Fused Compare & Write: Supported 00:11:59.571 Scatter-Gather List 00:11:59.571 SGL Command Set: Supported (Dword aligned) 00:11:59.571 SGL Keyed: Not Supported 00:11:59.571 SGL Bit Bucket Descriptor: Not Supported 00:11:59.571 SGL Metadata Pointer: Not Supported 00:11:59.571 Oversized SGL: Not Supported 00:11:59.571 SGL Metadata Address: Not Supported 00:11:59.571 SGL Offset: Not Supported 00:11:59.571 Transport SGL Data Block: Not Supported 00:11:59.571 Replay Protected Memory Block: Not Supported 00:11:59.571 00:11:59.571 Firmware Slot Information 00:11:59.571 ========================= 00:11:59.571 Active slot: 1 00:11:59.571 Slot 1 Firmware Revision: 25.01 00:11:59.571 00:11:59.571 00:11:59.571 Commands Supported and Effects 00:11:59.571 ============================== 00:11:59.571 Admin Commands 00:11:59.571 -------------- 00:11:59.571 Get Log Page (02h): Supported 00:11:59.571 Identify (06h): Supported 00:11:59.571 Abort (08h): Supported 00:11:59.571 Set Features (09h): Supported 00:11:59.571 Get Features (0Ah): Supported 00:11:59.571 Asynchronous Event Request (0Ch): Supported 00:11:59.571 Keep Alive (18h): Supported 00:11:59.571 I/O Commands 00:11:59.571 ------------ 00:11:59.571 Flush (00h): Supported LBA-Change 00:11:59.571 Write (01h): Supported LBA-Change 00:11:59.571 Read (02h): Supported 00:11:59.571 Compare (05h): Supported 00:11:59.571 Write Zeroes (08h): Supported LBA-Change 00:11:59.571 Dataset Management (09h): Supported LBA-Change 00:11:59.571 Copy (19h): Supported LBA-Change 00:11:59.571 
00:11:59.571 Error Log 00:11:59.571 ========= 00:11:59.571 00:11:59.571 Arbitration 00:11:59.571 =========== 00:11:59.571 Arbitration Burst: 1 00:11:59.571 00:11:59.571 Power Management 00:11:59.571 ================ 00:11:59.571 Number of Power States: 1 00:11:59.571 Current Power State: Power State #0 00:11:59.571 Power State #0: 00:11:59.571 Max Power: 0.00 W 00:11:59.571 Non-Operational State: Operational 00:11:59.571 Entry Latency: Not Reported 00:11:59.571 Exit Latency: Not Reported 00:11:59.571 Relative Read Throughput: 0 00:11:59.571 Relative Read Latency: 0 00:11:59.571 Relative Write Throughput: 0 00:11:59.571 Relative Write Latency: 0 00:11:59.571 Idle Power: Not Reported 00:11:59.571 Active Power: Not Reported 00:11:59.571 Non-Operational Permissive Mode: Not Supported 00:11:59.571 00:11:59.571 Health Information 00:11:59.571 ================== 00:11:59.571 Critical Warnings: 00:11:59.571 Available Spare Space: OK 00:11:59.571 Temperature: OK 00:11:59.571 Device Reliability: OK 00:11:59.571 Read Only: No 00:11:59.571 Volatile Memory Backup: OK 00:11:59.571 Current Temperature: 0 Kelvin (-273 Celsius) 00:11:59.571 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:11:59.571 Available Spare: 0% 00:11:59.571 Available Sp[2024-11-26 19:18:33.308561] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:11:59.571 [2024-11-26 19:18:33.308569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:11:59.571 [2024-11-26 19:18:33.308591] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:11:59.571 [2024-11-26 19:18:33.308598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.571 [2024-11-26 19:18:33.308602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.571 [2024-11-26 19:18:33.308607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.571 [2024-11-26 19:18:33.308611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:59.571 [2024-11-26 19:18:33.308856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:11:59.571 [2024-11-26 19:18:33.308863] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:11:59.571 [2024-11-26 19:18:33.309856] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:11:59.571 [2024-11-26 19:18:33.309895] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:11:59.571 [2024-11-26 19:18:33.309900] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:11:59.571 [2024-11-26 19:18:33.310863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:11:59.571 [2024-11-26 19:18:33.310871] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:11:59.571 [2024-11-26 19:18:33.310920] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:11:59.571 [2024-11-26 19:18:33.311886] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:11:59.571 are Threshold: 0% 00:11:59.571 Life Percentage Used: 0% 
00:11:59.571 Data Units Read: 0 00:11:59.571 Data Units Written: 0 00:11:59.571 Host Read Commands: 0 00:11:59.571 Host Write Commands: 0 00:11:59.571 Controller Busy Time: 0 minutes 00:11:59.571 Power Cycles: 0 00:11:59.571 Power On Hours: 0 hours 00:11:59.571 Unsafe Shutdowns: 0 00:11:59.571 Unrecoverable Media Errors: 0 00:11:59.571 Lifetime Error Log Entries: 0 00:11:59.571 Warning Temperature Time: 0 minutes 00:11:59.571 Critical Temperature Time: 0 minutes 00:11:59.571 00:11:59.571 Number of Queues 00:11:59.571 ================ 00:11:59.571 Number of I/O Submission Queues: 127 00:11:59.571 Number of I/O Completion Queues: 127 00:11:59.571 00:11:59.571 Active Namespaces 00:11:59.571 ================= 00:11:59.571 Namespace ID:1 00:11:59.571 Error Recovery Timeout: Unlimited 00:11:59.571 Command Set Identifier: NVM (00h) 00:11:59.571 Deallocate: Supported 00:11:59.571 Deallocated/Unwritten Error: Not Supported 00:11:59.571 Deallocated Read Value: Unknown 00:11:59.571 Deallocate in Write Zeroes: Not Supported 00:11:59.571 Deallocated Guard Field: 0xFFFF 00:11:59.571 Flush: Supported 00:11:59.571 Reservation: Supported 00:11:59.571 Namespace Sharing Capabilities: Multiple Controllers 00:11:59.571 Size (in LBAs): 131072 (0GiB) 00:11:59.571 Capacity (in LBAs): 131072 (0GiB) 00:11:59.571 Utilization (in LBAs): 131072 (0GiB) 00:11:59.571 NGUID: 6824E3C177E74722A9530F6AA3094D1A 00:11:59.571 UUID: 6824e3c1-77e7-4722-a953-0f6aa3094d1a 00:11:59.571 Thin Provisioning: Not Supported 00:11:59.571 Per-NS Atomic Units: Yes 00:11:59.571 Atomic Boundary Size (Normal): 0 00:11:59.571 Atomic Boundary Size (PFail): 0 00:11:59.571 Atomic Boundary Offset: 0 00:11:59.571 Maximum Single Source Range Length: 65535 00:11:59.571 Maximum Copy Length: 65535 00:11:59.571 Maximum Source Range Count: 1 00:11:59.571 NGUID/EUI64 Never Reused: No 00:11:59.571 Namespace Write Protected: No 00:11:59.571 Number of LBA Formats: 1 00:11:59.571 Current LBA Format: LBA Format #00 00:11:59.572 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:11:59.572 00:11:59.572 19:18:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:11:59.831 [2024-11-26 19:18:33.476723] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:05.101 Initializing NVMe Controllers 00:12:05.101 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:05.101 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:05.101 Initialization complete. Launching workers. 00:12:05.101 ======================================================== 00:12:05.101 Latency(us) 00:12:05.101 Device Information : IOPS MiB/s Average min max 00:12:05.101 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40042.72 156.42 3196.26 866.37 7694.10 00:12:05.101 ======================================================== 00:12:05.101 Total : 40042.72 156.42 3196.26 866.37 7694.10 00:12:05.101 00:12:05.101 [2024-11-26 19:18:38.493502] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:05.101 19:18:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:05.101 [2024-11-26 19:18:38.673312] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:10.371 Initializing NVMe Controllers 00:12:10.371 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:10.371 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:10.371 Initialization complete. Launching workers. 00:12:10.371 ======================================================== 00:12:10.371 Latency(us) 00:12:10.371 Device Information : IOPS MiB/s Average min max 00:12:10.371 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16047.21 62.68 7975.94 4988.79 9977.53 00:12:10.371 ======================================================== 00:12:10.371 Total : 16047.21 62.68 7975.94 4988.79 9977.53 00:12:10.371 00:12:10.371 [2024-11-26 19:18:43.704556] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:10.371 19:18:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:10.371 [2024-11-26 19:18:43.908418] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:15.644 [2024-11-26 19:18:48.973292] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:15.644 Initializing NVMe Controllers 00:12:15.644 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:15.644 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:15.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:15.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:15.644 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:15.644 Initialization complete. 
Launching workers. 00:12:15.644 Starting thread on core 2 00:12:15.644 Starting thread on core 3 00:12:15.644 Starting thread on core 1 00:12:15.644 19:18:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:15.644 [2024-11-26 19:18:49.216410] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.932 [2024-11-26 19:18:52.271541] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.932 Initializing NVMe Controllers 00:12:18.932 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.932 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:18.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:18.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:18.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:18.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:18.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:18.932 Initialization complete. Launching workers. 
00:12:18.932 Starting thread on core 1 with urgent priority queue 00:12:18.932 Starting thread on core 2 with urgent priority queue 00:12:18.933 Starting thread on core 3 with urgent priority queue 00:12:18.933 Starting thread on core 0 with urgent priority queue 00:12:18.933 SPDK bdev Controller (SPDK1 ) core 0: 12741.00 IO/s 7.85 secs/100000 ios 00:12:18.933 SPDK bdev Controller (SPDK1 ) core 1: 11656.00 IO/s 8.58 secs/100000 ios 00:12:18.933 SPDK bdev Controller (SPDK1 ) core 2: 12536.67 IO/s 7.98 secs/100000 ios 00:12:18.933 SPDK bdev Controller (SPDK1 ) core 3: 11485.00 IO/s 8.71 secs/100000 ios 00:12:18.933 ======================================================== 00:12:18.933 00:12:18.933 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:18.933 [2024-11-26 19:18:52.508858] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.933 Initializing NVMe Controllers 00:12:18.933 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.933 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:18.933 Namespace ID: 1 size: 0GB 00:12:18.933 Initialization complete. 00:12:18.933 INFO: using host memory buffer for IO 00:12:18.933 Hello world! 
00:12:18.933 [2024-11-26 19:18:52.542062] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.933 19:18:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:18.933 [2024-11-26 19:18:52.769519] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:20.311 Initializing NVMe Controllers 00:12:20.311 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.311 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.311 Initialization complete. Launching workers. 00:12:20.311 submit (in ns) avg, min, max = 4974.9, 2823.3, 3997180.0 00:12:20.311 complete (in ns) avg, min, max = 17742.0, 1669.2, 3997505.8 00:12:20.311 00:12:20.311 Submit histogram 00:12:20.311 ================ 00:12:20.311 Range in us Cumulative Count 00:12:20.311 2.813 - 2.827: 0.0400% ( 8) 00:12:20.311 2.827 - 2.840: 0.6305% ( 118) 00:12:20.311 2.840 - 2.853: 1.8915% ( 252) 00:12:20.311 2.853 - 2.867: 3.6980% ( 361) 00:12:20.311 2.867 - 2.880: 7.0606% ( 672) 00:12:20.311 2.880 - 2.893: 11.8745% ( 962) 00:12:20.311 2.893 - 2.907: 17.0436% ( 1033) 00:12:20.311 2.907 - 2.920: 23.0885% ( 1208) 00:12:20.311 2.920 - 2.933: 29.3385% ( 1249) 00:12:20.311 2.933 - 2.947: 35.5134% ( 1234) 00:12:20.311 2.947 - 2.960: 42.0536% ( 1307) 00:12:20.311 2.960 - 2.973: 49.0042% ( 1389) 00:12:20.311 2.973 - 2.987: 56.9806% ( 1594) 00:12:20.311 2.987 - 3.000: 64.9770% ( 1598) 00:12:20.311 3.000 - 3.013: 73.9392% ( 1791) 00:12:20.311 3.013 - 3.027: 82.2108% ( 1653) 00:12:20.311 3.027 - 3.040: 88.4057% ( 1238) 00:12:20.311 3.040 - 3.053: 93.0344% ( 925) 00:12:20.311 3.053 - 3.067: 95.4464% ( 482) 00:12:20.311 3.067 - 3.080: 96.8525% ( 281) 00:12:20.312 3.080 - 3.093: 
98.1235% ( 254) 00:12:20.312 3.093 - 3.107: 98.8741% ( 150) 00:12:20.312 3.107 - 3.120: 99.2744% ( 80) 00:12:20.312 3.120 - 3.133: 99.4446% ( 34) 00:12:20.312 3.133 - 3.147: 99.5146% ( 14) 00:12:20.312 3.147 - 3.160: 99.6047% ( 18) 00:12:20.312 3.160 - 3.173: 99.6197% ( 3) 00:12:20.312 3.213 - 3.227: 99.6247% ( 1) 00:12:20.312 3.307 - 3.320: 99.6297% ( 1) 00:12:20.312 3.347 - 3.360: 99.6347% ( 1) 00:12:20.312 3.413 - 3.440: 99.6397% ( 1) 00:12:20.312 3.520 - 3.547: 99.6447% ( 1) 00:12:20.312 3.547 - 3.573: 99.6597% ( 3) 00:12:20.312 3.573 - 3.600: 99.6647% ( 1) 00:12:20.312 3.920 - 3.947: 99.6697% ( 1) 00:12:20.312 4.080 - 4.107: 99.6747% ( 1) 00:12:20.312 4.347 - 4.373: 99.6797% ( 1) 00:12:20.312 4.373 - 4.400: 99.6847% ( 1) 00:12:20.312 4.427 - 4.453: 99.6898% ( 1) 00:12:20.312 4.453 - 4.480: 99.6948% ( 1) 00:12:20.312 4.480 - 4.507: 99.7048% ( 2) 00:12:20.312 4.533 - 4.560: 99.7098% ( 1) 00:12:20.312 4.640 - 4.667: 99.7198% ( 2) 00:12:20.312 4.693 - 4.720: 99.7298% ( 2) 00:12:20.312 4.720 - 4.747: 99.7348% ( 1) 00:12:20.312 4.773 - 4.800: 99.7398% ( 1) 00:12:20.312 4.827 - 4.853: 99.7448% ( 1) 00:12:20.312 4.853 - 4.880: 99.7498% ( 1) 00:12:20.312 4.880 - 4.907: 99.7548% ( 1) 00:12:20.312 4.933 - 4.960: 99.7598% ( 1) 00:12:20.312 4.960 - 4.987: 99.7748% ( 3) 00:12:20.312 4.987 - 5.013: 99.7848% ( 2) 00:12:20.312 5.013 - 5.040: 99.7998% ( 3) 00:12:20.312 5.040 - 5.067: 99.8149% ( 3) 00:12:20.312 5.067 - 5.093: 99.8199% ( 1) 00:12:20.312 5.093 - 5.120: 99.8249% ( 1) 00:12:20.312 5.120 - 5.147: 99.8399% ( 3) 00:12:20.312 5.147 - 5.173: 99.8449% ( 1) 00:12:20.312 5.173 - 5.200: 99.8549% ( 2) 00:12:20.312 5.200 - 5.227: 99.8599% ( 1) 00:12:20.312 5.253 - 5.280: 99.8649% ( 1) 00:12:20.312 5.333 - 5.360: 99.8699% ( 1) 00:12:20.312 5.360 - 5.387: 99.8799% ( 2) 00:12:20.312 5.387 - 5.413: 99.8899% ( 2) 00:12:20.312 5.413 - 5.440: 99.8949% ( 1) 00:12:20.312 5.600 - 5.627: 99.8999% ( 1) 00:12:20.312 5.680 - 5.707: 99.9049% ( 1) 00:12:20.312 5.760 - 5.787: 99.9099% ( 1) 
00:12:20.312 5.813 - 5.840: 99.9149% ( 1) 00:12:20.312 5.973 - 6.000: 99.9249% ( 2) 00:12:20.312 6.080 - 6.107: 99.9349% ( 2) 00:12:20.312 6.267 - 6.293: 99.9400% ( 1) 00:12:20.312 7.040 - 7.093: 99.9450% ( 1) 00:12:20.312 10.987 - 11.040: 99.9500% ( 1) 00:12:20.312 3986.773 - 4014.080: 100.0000% ( 10) 00:12:20.312 00:12:20.312 Complete histogram 00:12:20.312 ================== 00:12:20.312 Range in us Cumulative Count 00:12:20.312 1.667 - 1.673: 0.0500% ( 10) 00:12:20.312 1.673 - 1.680: 0.4053% ( 71) 00:12:20.312 1.680 - [2024-11-26 19:18:53.790124] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:20.312 1.687: 0.8707% ( 93) 00:12:20.312 1.687 - 1.693: 1.2510% ( 76) 00:12:20.312 1.693 - 1.700: 1.5312% ( 56) 00:12:20.312 1.700 - 1.707: 1.7114% ( 36) 00:12:20.312 1.707 - 1.720: 1.8315% ( 24) 00:12:20.312 1.720 - 1.733: 25.8307% ( 4796) 00:12:20.312 1.733 - 1.747: 64.7668% ( 7781) 00:12:20.312 1.747 - 1.760: 87.7602% ( 4595) 00:12:20.312 1.760 - 1.773: 96.4622% ( 1739) 00:12:20.312 1.773 - 1.787: 98.7040% ( 448) 00:12:20.312 1.787 - 1.800: 99.2244% ( 104) 00:12:20.312 1.800 - 1.813: 99.3395% ( 23) 00:12:20.312 1.813 - 1.827: 99.3795% ( 8) 00:12:20.312 1.827 - 1.840: 99.3945% ( 3) 00:12:20.312 1.853 - 1.867: 99.4045% ( 2) 00:12:20.312 1.867 - 1.880: 99.4095% ( 1) 00:12:20.312 1.880 - 1.893: 99.4145% ( 1) 00:12:20.312 1.893 - 1.907: 99.4195% ( 1) 00:12:20.312 1.947 - 1.960: 99.4245% ( 1) 00:12:20.312 3.173 - 3.187: 99.4295% ( 1) 00:12:20.312 3.280 - 3.293: 99.4345% ( 1) 00:12:20.312 3.293 - 3.307: 99.4396% ( 1) 00:12:20.312 3.307 - 3.320: 99.4446% ( 1) 00:12:20.312 3.387 - 3.400: 99.4496% ( 1) 00:12:20.312 3.440 - 3.467: 99.4546% ( 1) 00:12:20.312 3.653 - 3.680: 99.4596% ( 1) 00:12:20.312 3.680 - 3.707: 99.4646% ( 1) 00:12:20.312 3.840 - 3.867: 99.4696% ( 1) 00:12:20.312 3.893 - 3.920: 99.4796% ( 2) 00:12:20.312 3.920 - 3.947: 99.4846% ( 1) 00:12:20.312 3.947 - 3.973: 99.4946% ( 2) 00:12:20.312 3.973 - 4.000: 
99.4996% ( 1) 00:12:20.312 4.000 - 4.027: 99.5046% ( 1) 00:12:20.312 4.027 - 4.053: 99.5096% ( 1) 00:12:20.312 4.160 - 4.187: 99.5146% ( 1) 00:12:20.312 4.213 - 4.240: 99.5196% ( 1) 00:12:20.312 4.267 - 4.293: 99.5246% ( 1) 00:12:20.312 4.320 - 4.347: 99.5296% ( 1) 00:12:20.312 4.453 - 4.480: 99.5346% ( 1) 00:12:20.312 4.533 - 4.560: 99.5396% ( 1) 00:12:20.312 4.640 - 4.667: 99.5446% ( 1) 00:12:20.312 4.800 - 4.827: 99.5496% ( 1) 00:12:20.312 4.853 - 4.880: 99.5546% ( 1) 00:12:20.312 4.880 - 4.907: 99.5596% ( 1) 00:12:20.312 4.907 - 4.933: 99.5647% ( 1) 00:12:20.312 4.987 - 5.013: 99.5697% ( 1) 00:12:20.312 5.067 - 5.093: 99.5747% ( 1) 00:12:20.312 5.867 - 5.893: 99.5797% ( 1) 00:12:20.312 6.293 - 6.320: 99.5847% ( 1) 00:12:20.312 8.800 - 8.853: 99.5897% ( 1) 00:12:20.312 33.920 - 34.133: 99.5947% ( 1) 00:12:20.312 130.560 - 131.413: 99.5997% ( 1) 00:12:20.312 3986.773 - 4014.080: 100.0000% ( 80) 00:12:20.312 00:12:20.312 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:20.312 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:20.312 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:20.312 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:20.312 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:20.312 [ 00:12:20.312 { 00:12:20.312 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.312 "subtype": "Discovery", 00:12:20.312 "listen_addresses": [], 00:12:20.312 "allow_any_host": true, 00:12:20.312 "hosts": [] 00:12:20.312 }, 00:12:20.312 { 00:12:20.312 "nqn": 
"nqn.2019-07.io.spdk:cnode1", 00:12:20.312 "subtype": "NVMe", 00:12:20.312 "listen_addresses": [ 00:12:20.312 { 00:12:20.312 "trtype": "VFIOUSER", 00:12:20.312 "adrfam": "IPv4", 00:12:20.313 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.313 "trsvcid": "0" 00:12:20.313 } 00:12:20.313 ], 00:12:20.313 "allow_any_host": true, 00:12:20.313 "hosts": [], 00:12:20.313 "serial_number": "SPDK1", 00:12:20.313 "model_number": "SPDK bdev Controller", 00:12:20.313 "max_namespaces": 32, 00:12:20.313 "min_cntlid": 1, 00:12:20.313 "max_cntlid": 65519, 00:12:20.313 "namespaces": [ 00:12:20.313 { 00:12:20.313 "nsid": 1, 00:12:20.313 "bdev_name": "Malloc1", 00:12:20.313 "name": "Malloc1", 00:12:20.313 "nguid": "6824E3C177E74722A9530F6AA3094D1A", 00:12:20.313 "uuid": "6824e3c1-77e7-4722-a953-0f6aa3094d1a" 00:12:20.313 } 00:12:20.313 ] 00:12:20.313 }, 00:12:20.313 { 00:12:20.313 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.313 "subtype": "NVMe", 00:12:20.313 "listen_addresses": [ 00:12:20.313 { 00:12:20.313 "trtype": "VFIOUSER", 00:12:20.313 "adrfam": "IPv4", 00:12:20.313 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.313 "trsvcid": "0" 00:12:20.313 } 00:12:20.313 ], 00:12:20.313 "allow_any_host": true, 00:12:20.313 "hosts": [], 00:12:20.313 "serial_number": "SPDK2", 00:12:20.313 "model_number": "SPDK bdev Controller", 00:12:20.313 "max_namespaces": 32, 00:12:20.313 "min_cntlid": 1, 00:12:20.313 "max_cntlid": 65519, 00:12:20.313 "namespaces": [ 00:12:20.313 { 00:12:20.313 "nsid": 1, 00:12:20.313 "bdev_name": "Malloc2", 00:12:20.313 "name": "Malloc2", 00:12:20.313 "nguid": "C743DA07C50547E6BF9752DBC5D2E79C", 00:12:20.313 "uuid": "c743da07-c505-47e6-bf97-52dbc5d2e79c" 00:12:20.313 } 00:12:20.313 ] 00:12:20.313 } 00:12:20.313 ] 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # 
aerpid=3660231 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:20.313 19:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:20.313 [2024-11-26 19:18:54.137477] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:20.313 Malloc3 00:12:20.313 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:20.572 [2024-11-26 19:18:54.301603] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:20.572 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_get_subsystems 00:12:20.572 Asynchronous Event Request test 00:12:20.572 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.572 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:20.572 Registering asynchronous event callbacks... 00:12:20.572 Starting namespace attribute notice tests for all controllers... 00:12:20.572 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:20.572 aer_cb - Changed Namespace 00:12:20.572 Cleaning up... 00:12:20.833 [ 00:12:20.833 { 00:12:20.833 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:20.833 "subtype": "Discovery", 00:12:20.833 "listen_addresses": [], 00:12:20.833 "allow_any_host": true, 00:12:20.833 "hosts": [] 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:20.833 "subtype": "NVMe", 00:12:20.833 "listen_addresses": [ 00:12:20.833 { 00:12:20.833 "trtype": "VFIOUSER", 00:12:20.833 "adrfam": "IPv4", 00:12:20.833 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:20.833 "trsvcid": "0" 00:12:20.833 } 00:12:20.833 ], 00:12:20.833 "allow_any_host": true, 00:12:20.833 "hosts": [], 00:12:20.833 "serial_number": "SPDK1", 00:12:20.833 "model_number": "SPDK bdev Controller", 00:12:20.833 "max_namespaces": 32, 00:12:20.833 "min_cntlid": 1, 00:12:20.833 "max_cntlid": 65519, 00:12:20.833 "namespaces": [ 00:12:20.833 { 00:12:20.833 "nsid": 1, 00:12:20.833 "bdev_name": "Malloc1", 00:12:20.833 "name": "Malloc1", 00:12:20.833 "nguid": "6824E3C177E74722A9530F6AA3094D1A", 00:12:20.833 "uuid": "6824e3c1-77e7-4722-a953-0f6aa3094d1a" 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "nsid": 2, 00:12:20.833 "bdev_name": "Malloc3", 00:12:20.833 "name": "Malloc3", 00:12:20.833 "nguid": "64DB821A5CED41B099BA402075F3575C", 00:12:20.833 "uuid": "64db821a-5ced-41b0-99ba-402075f3575c" 00:12:20.833 } 00:12:20.833 ] 00:12:20.833 }, 00:12:20.833 { 00:12:20.833 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:20.833 "subtype": "NVMe", 
00:12:20.833 "listen_addresses": [ 00:12:20.833 { 00:12:20.833 "trtype": "VFIOUSER", 00:12:20.833 "adrfam": "IPv4", 00:12:20.833 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:20.833 "trsvcid": "0" 00:12:20.833 } 00:12:20.833 ], 00:12:20.833 "allow_any_host": true, 00:12:20.833 "hosts": [], 00:12:20.833 "serial_number": "SPDK2", 00:12:20.833 "model_number": "SPDK bdev Controller", 00:12:20.833 "max_namespaces": 32, 00:12:20.833 "min_cntlid": 1, 00:12:20.833 "max_cntlid": 65519, 00:12:20.833 "namespaces": [ 00:12:20.833 { 00:12:20.833 "nsid": 1, 00:12:20.833 "bdev_name": "Malloc2", 00:12:20.833 "name": "Malloc2", 00:12:20.833 "nguid": "C743DA07C50547E6BF9752DBC5D2E79C", 00:12:20.833 "uuid": "c743da07-c505-47e6-bf97-52dbc5d2e79c" 00:12:20.833 } 00:12:20.833 ] 00:12:20.833 } 00:12:20.833 ] 00:12:20.833 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3660231 00:12:20.833 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:20.833 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:20.833 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:20.833 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:20.833 [2024-11-26 19:18:54.490703] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:12:20.833 [2024-11-26 19:18:54.490736] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3660489 ] 00:12:20.833 [2024-11-26 19:18:54.527321] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:20.833 [2024-11-26 19:18:54.532507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:20.833 [2024-11-26 19:18:54.532526] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0546fc7000 00:12:20.833 [2024-11-26 19:18:54.533513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.534518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.535523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.536526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.537534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.538546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.539554] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:20.833 
[2024-11-26 19:18:54.540557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:20.833 [2024-11-26 19:18:54.541567] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:20.833 [2024-11-26 19:18:54.541574] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0546fbc000 00:12:20.833 [2024-11-26 19:18:54.542486] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:20.833 [2024-11-26 19:18:54.551853] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:20.833 [2024-11-26 19:18:54.551871] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:20.833 [2024-11-26 19:18:54.556925] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:20.833 [2024-11-26 19:18:54.556961] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:20.833 [2024-11-26 19:18:54.557022] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:20.833 [2024-11-26 19:18:54.557033] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:20.833 [2024-11-26 19:18:54.557037] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:20.833 [2024-11-26 19:18:54.557931] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:20.833 [2024-11-26 19:18:54.557940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:20.833 [2024-11-26 19:18:54.557945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:20.833 [2024-11-26 19:18:54.558933] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:20.833 [2024-11-26 19:18:54.558940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:20.833 [2024-11-26 19:18:54.558946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:20.833 [2024-11-26 19:18:54.559941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:20.833 [2024-11-26 19:18:54.559948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:20.833 [2024-11-26 19:18:54.560951] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:20.833 [2024-11-26 19:18:54.560957] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:20.833 [2024-11-26 19:18:54.560961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:20.833 [2024-11-26 19:18:54.560965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:20.833 [2024-11-26 19:18:54.561071] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:20.833 [2024-11-26 19:18:54.561075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:20.833 [2024-11-26 19:18:54.561078] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:20.833 [2024-11-26 19:18:54.561961] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:20.833 [2024-11-26 19:18:54.562968] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:20.833 [2024-11-26 19:18:54.563981] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:20.833 [2024-11-26 19:18:54.564984] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:20.833 [2024-11-26 19:18:54.565012] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:20.834 [2024-11-26 19:18:54.565992] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:20.834 [2024-11-26 19:18:54.565999] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:20.834 [2024-11-26 19:18:54.566002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.566017] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:20.834 [2024-11-26 19:18:54.566022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.566035] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.834 [2024-11-26 19:18:54.566038] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.834 [2024-11-26 19:18:54.566041] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.834 [2024-11-26 19:18:54.566052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.574106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.574115] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:20.834 [2024-11-26 19:18:54.574119] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:20.834 [2024-11-26 19:18:54.574123] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:20.834 [2024-11-26 19:18:54.574126] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:20.834 [2024-11-26 19:18:54.574130] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:20.834 [2024-11-26 19:18:54.574133] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:20.834 [2024-11-26 19:18:54.574137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.574142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.574150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.582106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.582115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.834 [2024-11-26 19:18:54.582122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.834 [2024-11-26 19:18:54.582128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.834 [2024-11-26 19:18:54.582134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:20.834 [2024-11-26 19:18:54.582137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.582145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.582151] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.590105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.590111] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:20.834 [2024-11-26 19:18:54.590115] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.590122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.590126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.590133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.598103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.598151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.598157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:20.834 
[2024-11-26 19:18:54.598162] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:20.834 [2024-11-26 19:18:54.598165] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:20.834 [2024-11-26 19:18:54.598168] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.834 [2024-11-26 19:18:54.598172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.606114] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:20.834 [2024-11-26 19:18:54.606122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.606128] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.606133] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.834 [2024-11-26 19:18:54.606136] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.834 [2024-11-26 19:18:54.606138] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.834 [2024-11-26 19:18:54.606143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.614105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.614113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.614119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.614124] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:20.834 [2024-11-26 19:18:54.614127] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.834 [2024-11-26 19:18:54.614130] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.834 [2024-11-26 19:18:54.614134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.622105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.622114] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622119] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622132] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622135] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622139] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622143] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:20.834 [2024-11-26 19:18:54.622146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:20.834 [2024-11-26 19:18:54.622150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:20.834 [2024-11-26 19:18:54.622162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.630105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.630115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.638104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.638114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.646104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 
19:18:54.646113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:20.834 [2024-11-26 19:18:54.654105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:20.834 [2024-11-26 19:18:54.654116] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:20.834 [2024-11-26 19:18:54.654120] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:20.834 [2024-11-26 19:18:54.654122] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:20.834 [2024-11-26 19:18:54.654124] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:20.834 [2024-11-26 19:18:54.654127] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:20.834 [2024-11-26 19:18:54.654132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:20.834 [2024-11-26 19:18:54.654138] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:20.835 [2024-11-26 19:18:54.654141] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:20.835 [2024-11-26 19:18:54.654143] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.835 [2024-11-26 19:18:54.654147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:20.835 [2024-11-26 19:18:54.654152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:20.835 [2024-11-26 19:18:54.654155] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:20.835 [2024-11-26 19:18:54.654158] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.835 [2024-11-26 19:18:54.654163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:20.835 [2024-11-26 19:18:54.654169] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:20.835 [2024-11-26 19:18:54.654172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:20.835 [2024-11-26 19:18:54.654174] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:20.835 [2024-11-26 19:18:54.654178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:20.835 [2024-11-26 19:18:54.662105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:20.835 [2024-11-26 19:18:54.662115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:20.835 [2024-11-26 19:18:54.662123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:20.835 [2024-11-26 19:18:54.662128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:20.835 ===================================================== 00:12:20.835 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:20.835 ===================================================== 00:12:20.835 Controller Capabilities/Features 00:12:20.835 
================================ 00:12:20.835 Vendor ID: 4e58 00:12:20.835 Subsystem Vendor ID: 4e58 00:12:20.835 Serial Number: SPDK2 00:12:20.835 Model Number: SPDK bdev Controller 00:12:20.835 Firmware Version: 25.01 00:12:20.835 Recommended Arb Burst: 6 00:12:20.835 IEEE OUI Identifier: 8d 6b 50 00:12:20.835 Multi-path I/O 00:12:20.835 May have multiple subsystem ports: Yes 00:12:20.835 May have multiple controllers: Yes 00:12:20.835 Associated with SR-IOV VF: No 00:12:20.835 Max Data Transfer Size: 131072 00:12:20.835 Max Number of Namespaces: 32 00:12:20.835 Max Number of I/O Queues: 127 00:12:20.835 NVMe Specification Version (VS): 1.3 00:12:20.835 NVMe Specification Version (Identify): 1.3 00:12:20.835 Maximum Queue Entries: 256 00:12:20.835 Contiguous Queues Required: Yes 00:12:20.835 Arbitration Mechanisms Supported 00:12:20.835 Weighted Round Robin: Not Supported 00:12:20.835 Vendor Specific: Not Supported 00:12:20.835 Reset Timeout: 15000 ms 00:12:20.835 Doorbell Stride: 4 bytes 00:12:20.835 NVM Subsystem Reset: Not Supported 00:12:20.835 Command Sets Supported 00:12:20.835 NVM Command Set: Supported 00:12:20.835 Boot Partition: Not Supported 00:12:20.835 Memory Page Size Minimum: 4096 bytes 00:12:20.835 Memory Page Size Maximum: 4096 bytes 00:12:20.835 Persistent Memory Region: Not Supported 00:12:20.835 Optional Asynchronous Events Supported 00:12:20.835 Namespace Attribute Notices: Supported 00:12:20.835 Firmware Activation Notices: Not Supported 00:12:20.835 ANA Change Notices: Not Supported 00:12:20.835 PLE Aggregate Log Change Notices: Not Supported 00:12:20.835 LBA Status Info Alert Notices: Not Supported 00:12:20.835 EGE Aggregate Log Change Notices: Not Supported 00:12:20.835 Normal NVM Subsystem Shutdown event: Not Supported 00:12:20.835 Zone Descriptor Change Notices: Not Supported 00:12:20.835 Discovery Log Change Notices: Not Supported 00:12:20.835 Controller Attributes 00:12:20.835 128-bit Host Identifier: Supported 00:12:20.835 
Non-Operational Permissive Mode: Not Supported 00:12:20.835 NVM Sets: Not Supported 00:12:20.835 Read Recovery Levels: Not Supported 00:12:20.835 Endurance Groups: Not Supported 00:12:20.835 Predictable Latency Mode: Not Supported 00:12:20.835 Traffic Based Keep ALive: Not Supported 00:12:20.835 Namespace Granularity: Not Supported 00:12:20.835 SQ Associations: Not Supported 00:12:20.835 UUID List: Not Supported 00:12:20.835 Multi-Domain Subsystem: Not Supported 00:12:20.835 Fixed Capacity Management: Not Supported 00:12:20.835 Variable Capacity Management: Not Supported 00:12:20.835 Delete Endurance Group: Not Supported 00:12:20.835 Delete NVM Set: Not Supported 00:12:20.835 Extended LBA Formats Supported: Not Supported 00:12:20.835 Flexible Data Placement Supported: Not Supported 00:12:20.835 00:12:20.835 Controller Memory Buffer Support 00:12:20.835 ================================ 00:12:20.835 Supported: No 00:12:20.835 00:12:20.835 Persistent Memory Region Support 00:12:20.835 ================================ 00:12:20.835 Supported: No 00:12:20.835 00:12:20.835 Admin Command Set Attributes 00:12:20.835 ============================ 00:12:20.835 Security Send/Receive: Not Supported 00:12:20.835 Format NVM: Not Supported 00:12:20.835 Firmware Activate/Download: Not Supported 00:12:20.835 Namespace Management: Not Supported 00:12:20.835 Device Self-Test: Not Supported 00:12:20.835 Directives: Not Supported 00:12:20.835 NVMe-MI: Not Supported 00:12:20.835 Virtualization Management: Not Supported 00:12:20.835 Doorbell Buffer Config: Not Supported 00:12:20.835 Get LBA Status Capability: Not Supported 00:12:20.835 Command & Feature Lockdown Capability: Not Supported 00:12:20.835 Abort Command Limit: 4 00:12:20.835 Async Event Request Limit: 4 00:12:20.835 Number of Firmware Slots: N/A 00:12:20.835 Firmware Slot 1 Read-Only: N/A 00:12:20.835 Firmware Activation Without Reset: N/A 00:12:20.835 Multiple Update Detection Support: N/A 00:12:20.835 Firmware Update 
Granularity: No Information Provided 00:12:20.835 Per-Namespace SMART Log: No 00:12:20.835 Asymmetric Namespace Access Log Page: Not Supported 00:12:20.835 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:20.835 Command Effects Log Page: Supported 00:12:20.835 Get Log Page Extended Data: Supported 00:12:20.835 Telemetry Log Pages: Not Supported 00:12:20.835 Persistent Event Log Pages: Not Supported 00:12:20.835 Supported Log Pages Log Page: May Support 00:12:20.835 Commands Supported & Effects Log Page: Not Supported 00:12:20.835 Feature Identifiers & Effects Log Page:May Support 00:12:20.835 NVMe-MI Commands & Effects Log Page: May Support 00:12:20.835 Data Area 4 for Telemetry Log: Not Supported 00:12:20.835 Error Log Page Entries Supported: 128 00:12:20.835 Keep Alive: Supported 00:12:20.835 Keep Alive Granularity: 10000 ms 00:12:20.835 00:12:20.835 NVM Command Set Attributes 00:12:20.835 ========================== 00:12:20.835 Submission Queue Entry Size 00:12:20.835 Max: 64 00:12:20.835 Min: 64 00:12:20.835 Completion Queue Entry Size 00:12:20.835 Max: 16 00:12:20.835 Min: 16 00:12:20.835 Number of Namespaces: 32 00:12:20.835 Compare Command: Supported 00:12:20.835 Write Uncorrectable Command: Not Supported 00:12:20.835 Dataset Management Command: Supported 00:12:20.835 Write Zeroes Command: Supported 00:12:20.835 Set Features Save Field: Not Supported 00:12:20.835 Reservations: Not Supported 00:12:20.835 Timestamp: Not Supported 00:12:20.835 Copy: Supported 00:12:20.835 Volatile Write Cache: Present 00:12:20.835 Atomic Write Unit (Normal): 1 00:12:20.835 Atomic Write Unit (PFail): 1 00:12:20.835 Atomic Compare & Write Unit: 1 00:12:20.835 Fused Compare & Write: Supported 00:12:20.835 Scatter-Gather List 00:12:20.835 SGL Command Set: Supported (Dword aligned) 00:12:20.835 SGL Keyed: Not Supported 00:12:20.835 SGL Bit Bucket Descriptor: Not Supported 00:12:20.835 SGL Metadata Pointer: Not Supported 00:12:20.835 Oversized SGL: Not Supported 00:12:20.835 SGL 
Metadata Address: Not Supported 00:12:20.835 SGL Offset: Not Supported 00:12:20.835 Transport SGL Data Block: Not Supported 00:12:20.835 Replay Protected Memory Block: Not Supported 00:12:20.835 00:12:20.835 Firmware Slot Information 00:12:20.835 ========================= 00:12:20.835 Active slot: 1 00:12:20.835 Slot 1 Firmware Revision: 25.01 00:12:20.835 00:12:20.835 00:12:20.835 Commands Supported and Effects 00:12:20.835 ============================== 00:12:20.835 Admin Commands 00:12:20.835 -------------- 00:12:20.835 Get Log Page (02h): Supported 00:12:20.835 Identify (06h): Supported 00:12:20.835 Abort (08h): Supported 00:12:20.835 Set Features (09h): Supported 00:12:20.835 Get Features (0Ah): Supported 00:12:20.835 Asynchronous Event Request (0Ch): Supported 00:12:20.835 Keep Alive (18h): Supported 00:12:20.835 I/O Commands 00:12:20.835 ------------ 00:12:20.835 Flush (00h): Supported LBA-Change 00:12:20.835 Write (01h): Supported LBA-Change 00:12:20.835 Read (02h): Supported 00:12:20.835 Compare (05h): Supported 00:12:20.835 Write Zeroes (08h): Supported LBA-Change 00:12:20.836 Dataset Management (09h): Supported LBA-Change 00:12:20.836 Copy (19h): Supported LBA-Change 00:12:20.836 00:12:20.836 Error Log 00:12:20.836 ========= 00:12:20.836 00:12:20.836 Arbitration 00:12:20.836 =========== 00:12:20.836 Arbitration Burst: 1 00:12:20.836 00:12:20.836 Power Management 00:12:20.836 ================ 00:12:20.836 Number of Power States: 1 00:12:20.836 Current Power State: Power State #0 00:12:20.836 Power State #0: 00:12:20.836 Max Power: 0.00 W 00:12:20.836 Non-Operational State: Operational 00:12:20.836 Entry Latency: Not Reported 00:12:20.836 Exit Latency: Not Reported 00:12:20.836 Relative Read Throughput: 0 00:12:20.836 Relative Read Latency: 0 00:12:20.836 Relative Write Throughput: 0 00:12:20.836 Relative Write Latency: 0 00:12:20.836 Idle Power: Not Reported 00:12:20.836 Active Power: Not Reported 00:12:20.836 Non-Operational Permissive Mode: Not 
Supported 00:12:20.836 00:12:20.836 Health Information 00:12:20.836 ================== 00:12:20.836 Critical Warnings: 00:12:20.836 Available Spare Space: OK 00:12:20.836 Temperature: OK 00:12:20.836 Device Reliability: OK 00:12:20.836 Read Only: No 00:12:20.836 Volatile Memory Backup: OK 00:12:20.836 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:20.836 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:20.836 Available Spare: 0% 00:12:20.836 Available Sp[2024-11-26 19:18:54.662199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:20.836 [2024-11-26 19:18:54.670105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:20.836 [2024-11-26 19:18:54.670129] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:20.836 [2024-11-26 19:18:54.670135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.836 [2024-11-26 19:18:54.670140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.836 [2024-11-26 19:18:54.670145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.836 [2024-11-26 19:18:54.670149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.836 [2024-11-26 19:18:54.670186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:20.836 [2024-11-26 19:18:54.670194] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:20.836 
[2024-11-26 19:18:54.671196] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:20.836 [2024-11-26 19:18:54.671231] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:20.836 [2024-11-26 19:18:54.671236] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:20.836 [2024-11-26 19:18:54.672203] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:20.836 [2024-11-26 19:18:54.672211] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:20.836 [2024-11-26 19:18:54.672252] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:20.836 [2024-11-26 19:18:54.673224] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:21.096 are Threshold: 0% 00:12:21.096 Life Percentage Used: 0% 00:12:21.096 Data Units Read: 0 00:12:21.096 Data Units Written: 0 00:12:21.096 Host Read Commands: 0 00:12:21.096 Host Write Commands: 0 00:12:21.096 Controller Busy Time: 0 minutes 00:12:21.096 Power Cycles: 0 00:12:21.096 Power On Hours: 0 hours 00:12:21.096 Unsafe Shutdowns: 0 00:12:21.096 Unrecoverable Media Errors: 0 00:12:21.096 Lifetime Error Log Entries: 0 00:12:21.096 Warning Temperature Time: 0 minutes 00:12:21.096 Critical Temperature Time: 0 minutes 00:12:21.096 00:12:21.096 Number of Queues 00:12:21.096 ================ 00:12:21.096 Number of I/O Submission Queues: 127 00:12:21.096 Number of I/O Completion Queues: 127 00:12:21.096 00:12:21.096 Active Namespaces 00:12:21.096 ================= 00:12:21.096 Namespace ID:1 00:12:21.096 Error Recovery Timeout: Unlimited 
00:12:21.096 Command Set Identifier: NVM (00h) 00:12:21.096 Deallocate: Supported 00:12:21.096 Deallocated/Unwritten Error: Not Supported 00:12:21.096 Deallocated Read Value: Unknown 00:12:21.096 Deallocate in Write Zeroes: Not Supported 00:12:21.096 Deallocated Guard Field: 0xFFFF 00:12:21.096 Flush: Supported 00:12:21.096 Reservation: Supported 00:12:21.096 Namespace Sharing Capabilities: Multiple Controllers 00:12:21.096 Size (in LBAs): 131072 (0GiB) 00:12:21.096 Capacity (in LBAs): 131072 (0GiB) 00:12:21.096 Utilization (in LBAs): 131072 (0GiB) 00:12:21.096 NGUID: C743DA07C50547E6BF9752DBC5D2E79C 00:12:21.096 UUID: c743da07-c505-47e6-bf97-52dbc5d2e79c 00:12:21.096 Thin Provisioning: Not Supported 00:12:21.096 Per-NS Atomic Units: Yes 00:12:21.096 Atomic Boundary Size (Normal): 0 00:12:21.096 Atomic Boundary Size (PFail): 0 00:12:21.096 Atomic Boundary Offset: 0 00:12:21.096 Maximum Single Source Range Length: 65535 00:12:21.096 Maximum Copy Length: 65535 00:12:21.096 Maximum Source Range Count: 1 00:12:21.096 NGUID/EUI64 Never Reused: No 00:12:21.096 Namespace Write Protected: No 00:12:21.096 Number of LBA Formats: 1 00:12:21.096 Current LBA Format: LBA Format #00 00:12:21.096 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:21.096 00:12:21.096 19:18:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:21.096 [2024-11-26 19:18:54.843476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:26.366 Initializing NVMe Controllers 00:12:26.366 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:26.366 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:12:26.366 Initialization complete. Launching workers. 00:12:26.366 ======================================================== 00:12:26.366 Latency(us) 00:12:26.366 Device Information : IOPS MiB/s Average min max 00:12:26.366 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40032.59 156.38 3197.71 862.46 6817.21 00:12:26.366 ======================================================== 00:12:26.366 Total : 40032.59 156.38 3197.71 862.46 6817.21 00:12:26.366 00:12:26.366 [2024-11-26 19:18:59.955320] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:26.366 19:18:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:26.366 [2024-11-26 19:19:00.135282] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:31.637 Initializing NVMe Controllers 00:12:31.637 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:31.637 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:31.637 Initialization complete. Launching workers. 
00:12:31.637 ======================================================== 00:12:31.637 Latency(us) 00:12:31.637 Device Information : IOPS MiB/s Average min max 00:12:31.637 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39997.99 156.24 3200.11 858.07 7177.14 00:12:31.637 ======================================================== 00:12:31.637 Total : 39997.99 156.24 3200.11 858.07 7177.14 00:12:31.637 00:12:31.637 [2024-11-26 19:19:05.155691] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:31.637 19:19:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:31.637 [2024-11-26 19:19:05.343841] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:36.909 [2024-11-26 19:19:10.491185] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:36.909 Initializing NVMe Controllers 00:12:36.909 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.909 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:36.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:36.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:36.909 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:36.909 Initialization complete. Launching workers. 
00:12:36.909 Starting thread on core 2 00:12:36.909 Starting thread on core 3 00:12:36.909 Starting thread on core 1 00:12:36.909 19:19:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:36.909 [2024-11-26 19:19:10.731520] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.198 [2024-11-26 19:19:13.785533] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.198 Initializing NVMe Controllers 00:12:40.198 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.198 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:40.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:40.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:40.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:40.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:40.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:40.198 Initialization complete. Launching workers. 
00:12:40.198 Starting thread on core 1 with urgent priority queue 00:12:40.198 Starting thread on core 2 with urgent priority queue 00:12:40.198 Starting thread on core 3 with urgent priority queue 00:12:40.198 Starting thread on core 0 with urgent priority queue 00:12:40.198 SPDK bdev Controller (SPDK2 ) core 0: 12312.67 IO/s 8.12 secs/100000 ios 00:12:40.199 SPDK bdev Controller (SPDK2 ) core 1: 8720.33 IO/s 11.47 secs/100000 ios 00:12:40.199 SPDK bdev Controller (SPDK2 ) core 2: 8206.67 IO/s 12.19 secs/100000 ios 00:12:40.199 SPDK bdev Controller (SPDK2 ) core 3: 10708.00 IO/s 9.34 secs/100000 ios 00:12:40.199 ======================================================== 00:12:40.199 00:12:40.199 19:19:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:40.199 [2024-11-26 19:19:14.017486] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.199 Initializing NVMe Controllers 00:12:40.199 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.199 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:40.199 Namespace ID: 1 size: 0GB 00:12:40.199 Initialization complete. 00:12:40.199 INFO: using host memory buffer for IO 00:12:40.199 Hello world! 
00:12:40.199 [2024-11-26 19:19:14.027538] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.457 19:19:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:40.457 [2024-11-26 19:19:14.253259] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:41.834 Initializing NVMe Controllers 00:12:41.834 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:41.834 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:41.834 Initialization complete. Launching workers. 00:12:41.834 submit (in ns) avg, min, max = 5746.0, 2838.3, 3997754.2 00:12:41.834 complete (in ns) avg, min, max = 17839.6, 1643.3, 6987772.5 00:12:41.834 00:12:41.834 Submit histogram 00:12:41.834 ================ 00:12:41.834 Range in us Cumulative Count 00:12:41.834 2.827 - 2.840: 0.0049% ( 1) 00:12:41.834 2.840 - 2.853: 0.2117% ( 42) 00:12:41.834 2.853 - 2.867: 1.4232% ( 246) 00:12:41.834 2.867 - 2.880: 3.9494% ( 513) 00:12:41.834 2.880 - 2.893: 7.3620% ( 693) 00:12:41.834 2.893 - 2.907: 11.1735% ( 774) 00:12:41.834 2.907 - 2.920: 15.6449% ( 908) 00:12:41.834 2.920 - 2.933: 21.2488% ( 1138) 00:12:41.834 2.933 - 2.947: 27.4339% ( 1256) 00:12:41.834 2.947 - 2.960: 34.1360% ( 1361) 00:12:41.834 2.960 - 2.973: 40.1487% ( 1221) 00:12:41.834 2.973 - 2.987: 46.8311% ( 1357) 00:12:41.834 2.987 - 3.000: 54.7742% ( 1613) 00:12:41.834 3.000 - 3.013: 64.3128% ( 1937) 00:12:41.834 3.013 - 3.027: 73.2801% ( 1821) 00:12:41.834 3.027 - 3.040: 80.7653% ( 1520) 00:12:41.834 3.040 - 3.053: 86.8863% ( 1243) 00:12:41.834 3.053 - 3.067: 92.0372% ( 1046) 00:12:41.834 3.067 - 3.080: 95.2627% ( 655) 00:12:41.834 3.080 - 3.093: 97.2670% ( 407) 00:12:41.834 3.093 - 3.107: 98.3996% 
( 230) 00:12:41.834 3.107 - 3.120: 99.0939% ( 141) 00:12:41.834 3.120 - 3.133: 99.4140% ( 65) 00:12:41.834 3.133 - 3.147: 99.5322% ( 24) 00:12:41.835 3.147 - 3.160: 99.5667% ( 7) 00:12:41.835 3.160 - 3.173: 99.5765% ( 2) 00:12:41.835 3.173 - 3.187: 99.5962% ( 4) 00:12:41.835 3.227 - 3.240: 99.6011% ( 1) 00:12:41.835 3.240 - 3.253: 99.6110% ( 2) 00:12:41.835 3.320 - 3.333: 99.6159% ( 1) 00:12:41.835 3.413 - 3.440: 99.6208% ( 1) 00:12:41.835 3.440 - 3.467: 99.6257% ( 1) 00:12:41.835 3.520 - 3.547: 99.6307% ( 1) 00:12:41.835 3.680 - 3.707: 99.6356% ( 1) 00:12:41.835 3.867 - 3.893: 99.6405% ( 1) 00:12:41.835 4.053 - 4.080: 99.6454% ( 1) 00:12:41.835 4.080 - 4.107: 99.6553% ( 2) 00:12:41.835 4.160 - 4.187: 99.6602% ( 1) 00:12:41.835 4.347 - 4.373: 99.6651% ( 1) 00:12:41.835 4.453 - 4.480: 99.6750% ( 2) 00:12:41.835 4.480 - 4.507: 99.6799% ( 1) 00:12:41.835 4.533 - 4.560: 99.6848% ( 1) 00:12:41.835 4.587 - 4.613: 99.6947% ( 2) 00:12:41.835 4.613 - 4.640: 99.7045% ( 2) 00:12:41.835 4.640 - 4.667: 99.7144% ( 2) 00:12:41.835 4.720 - 4.747: 99.7193% ( 1) 00:12:41.835 4.773 - 4.800: 99.7341% ( 3) 00:12:41.835 4.827 - 4.853: 99.7390% ( 1) 00:12:41.835 4.880 - 4.907: 99.7489% ( 2) 00:12:41.835 4.933 - 4.960: 99.7538% ( 1) 00:12:41.835 4.960 - 4.987: 99.7686% ( 3) 00:12:41.835 4.987 - 5.013: 99.7784% ( 2) 00:12:41.835 5.013 - 5.040: 99.7883% ( 2) 00:12:41.835 5.040 - 5.067: 99.7932% ( 1) 00:12:41.835 5.067 - 5.093: 99.7981% ( 1) 00:12:41.835 5.093 - 5.120: 99.8030% ( 1) 00:12:41.835 5.147 - 5.173: 99.8079% ( 1) 00:12:41.835 5.173 - 5.200: 99.8129% ( 1) 00:12:41.835 5.253 - 5.280: 99.8178% ( 1) 00:12:41.835 5.280 - 5.307: 99.8227% ( 1) 00:12:41.835 5.493 - 5.520: 99.8276% ( 1) 00:12:41.835 5.573 - 5.600: 99.8326% ( 1) 00:12:41.835 5.680 - 5.707: 99.8375% ( 1) 00:12:41.835 5.733 - 5.760: 99.8424% ( 1) 00:12:41.835 5.787 - 5.813: 99.8523% ( 2) 00:12:41.835 5.813 - 5.840: 99.8572% ( 1) 00:12:41.835 5.867 - 5.893: 99.8621% ( 1) 00:12:41.835 5.920 - 5.947: 99.8670% ( 1) 00:12:41.835 
5.973 - 6.000: 99.8720% ( 1) 00:12:41.835 6.053 - 6.080: 99.8769% ( 1) 00:12:41.835 6.080 - 6.107: 99.8867% ( 2) 00:12:41.835 6.187 - 6.213: 99.8917% ( 1) 00:12:41.835 6.347 - 6.373: 99.8966% ( 1) 00:12:41.835 6.373 - 6.400: 99.9015% ( 1) 00:12:41.835 6.427 - 6.453: 99.9064% ( 1) 00:12:41.835 6.453 - 6.480: 99.9114% ( 1) 00:12:41.835 6.747 - 6.773: 99.9163% ( 1) 00:12:41.835 7.040 - 7.093: 99.9212% ( 1) 00:12:41.835 8.587 - 8.640: 99.9261% ( 1) 00:12:41.835 [2024-11-26 19:19:15.347627] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:41.835 11.573 - 11.627: 99.9311% ( 1) 00:12:41.835 3986.773 - 4014.080: 100.0000% ( 14) 00:12:41.835 00:12:41.835 Complete histogram 00:12:41.835 ================== 00:12:41.835 Range in us Cumulative Count 00:12:41.835 1.640 - 1.647: 0.1625% ( 33) 00:12:41.835 1.647 - 1.653: 1.1129% ( 193) 00:12:41.835 1.653 - 1.660: 1.2360% ( 25) 00:12:41.835 1.660 - 1.667: 1.3197% ( 17) 00:12:41.835 1.667 - 1.673: 1.4379% ( 24) 00:12:41.835 1.673 - 1.680: 6.0324% ( 933) 00:12:41.835 1.680 - 1.687: 56.8425% ( 10318) 00:12:41.835 1.687 - 1.693: 60.2649% ( 695) 00:12:41.835 1.693 - 1.700: 68.7398% ( 1721) 00:12:41.835 1.700 - 1.707: 78.1947% ( 1920) 00:12:41.835 1.707 - 1.720: 82.8975% ( 955) 00:12:41.835 1.720 - 1.733: 84.0991% ( 244) 00:12:41.835 1.733 - 1.747: 87.2901% ( 648) 00:12:41.835 1.747 - 1.760: 91.9387% ( 944) 00:12:41.835 1.760 - 1.773: 96.4200% ( 910) 00:12:41.835 1.773 - 1.787: 98.5128% ( 425) 00:12:41.835 1.787 - 1.800: 99.1825% ( 136) 00:12:41.835 1.800 - 1.813: 99.3697% ( 38) 00:12:41.835 1.813 - 1.827: 99.4091% ( 8) 00:12:41.835 1.827 - 1.840: 99.4140% ( 1) 00:12:41.835 2.200 - 2.213: 99.4189% ( 1) 00:12:41.835 3.187 - 3.200: 99.4238% ( 1) 00:12:41.835 3.213 - 3.227: 99.4288% ( 1) 00:12:41.835 3.253 - 3.267: 99.4386% ( 2) 00:12:41.835 3.307 - 3.320: 99.4485% ( 2) 00:12:41.835 3.320 - 3.333: 99.4534% ( 1) 00:12:41.835 3.387 - 3.400: 99.4583% ( 1) 00:12:41.835 3.493 - 3.520: 
99.4682% ( 2) 00:12:41.835 3.600 - 3.627: 99.4731% ( 1) 00:12:41.835 3.733 - 3.760: 99.4879% ( 3) 00:12:41.835 3.787 - 3.813: 99.4928% ( 1) 00:12:41.835 3.813 - 3.840: 99.4977% ( 1) 00:12:41.835 3.840 - 3.867: 99.5026% ( 1) 00:12:41.835 4.027 - 4.053: 99.5076% ( 1) 00:12:41.835 4.160 - 4.187: 99.5174% ( 2) 00:12:41.835 4.240 - 4.267: 99.5223% ( 1) 00:12:41.835 4.293 - 4.320: 99.5273% ( 1) 00:12:41.835 4.533 - 4.560: 99.5322% ( 1) 00:12:41.835 4.587 - 4.613: 99.5371% ( 1) 00:12:41.835 4.613 - 4.640: 99.5420% ( 1) 00:12:41.835 4.667 - 4.693: 99.5470% ( 1) 00:12:41.835 4.720 - 4.747: 99.5519% ( 1) 00:12:41.835 4.827 - 4.853: 99.5568% ( 1) 00:12:41.835 4.960 - 4.987: 99.5617% ( 1) 00:12:41.835 5.013 - 5.040: 99.5667% ( 1) 00:12:41.835 5.173 - 5.200: 99.5716% ( 1) 00:12:41.835 5.333 - 5.360: 99.5765% ( 1) 00:12:41.835 7.573 - 7.627: 99.5814% ( 1) 00:12:41.835 8.480 - 8.533: 99.5863% ( 1) 00:12:41.835 17.387 - 17.493: 99.5913% ( 1) 00:12:41.835 34.133 - 34.347: 99.5962% ( 1) 00:12:41.835 109.227 - 110.080: 99.6011% ( 1) 00:12:41.835 1037.653 - 1044.480: 99.6060% ( 1) 00:12:41.835 3986.773 - 4014.080: 99.9852% ( 77) 00:12:41.835 5980.160 - 6007.467: 99.9951% ( 2) 00:12:41.835 6963.200 - 6990.507: 100.0000% ( 1) 00:12:41.835 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:41.835 [ 00:12:41.835 { 00:12:41.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:41.835 "subtype": "Discovery", 00:12:41.835 "listen_addresses": [], 00:12:41.835 "allow_any_host": true, 00:12:41.835 "hosts": [] 00:12:41.835 }, 00:12:41.835 { 00:12:41.835 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:41.835 "subtype": "NVMe", 00:12:41.835 "listen_addresses": [ 00:12:41.835 { 00:12:41.835 "trtype": "VFIOUSER", 00:12:41.835 "adrfam": "IPv4", 00:12:41.835 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:41.835 "trsvcid": "0" 00:12:41.835 } 00:12:41.835 ], 00:12:41.835 "allow_any_host": true, 00:12:41.835 "hosts": [], 00:12:41.835 "serial_number": "SPDK1", 00:12:41.835 "model_number": "SPDK bdev Controller", 00:12:41.835 "max_namespaces": 32, 00:12:41.835 "min_cntlid": 1, 00:12:41.835 "max_cntlid": 65519, 00:12:41.835 "namespaces": [ 00:12:41.835 { 00:12:41.835 "nsid": 1, 00:12:41.835 "bdev_name": "Malloc1", 00:12:41.835 "name": "Malloc1", 00:12:41.835 "nguid": "6824E3C177E74722A9530F6AA3094D1A", 00:12:41.835 "uuid": "6824e3c1-77e7-4722-a953-0f6aa3094d1a" 00:12:41.835 }, 00:12:41.835 { 00:12:41.835 "nsid": 2, 00:12:41.835 "bdev_name": "Malloc3", 00:12:41.835 "name": "Malloc3", 00:12:41.835 "nguid": "64DB821A5CED41B099BA402075F3575C", 00:12:41.835 "uuid": "64db821a-5ced-41b0-99ba-402075f3575c" 00:12:41.835 } 00:12:41.835 ] 00:12:41.835 }, 00:12:41.835 { 00:12:41.835 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:41.835 "subtype": "NVMe", 00:12:41.835 "listen_addresses": [ 00:12:41.835 { 00:12:41.835 "trtype": "VFIOUSER", 00:12:41.835 "adrfam": "IPv4", 00:12:41.835 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:41.835 "trsvcid": "0" 00:12:41.835 } 00:12:41.835 ], 00:12:41.835 "allow_any_host": true, 00:12:41.835 "hosts": [], 00:12:41.835 "serial_number": "SPDK2", 00:12:41.835 "model_number": "SPDK bdev Controller", 00:12:41.835 "max_namespaces": 32, 
00:12:41.835 "min_cntlid": 1, 00:12:41.835 "max_cntlid": 65519, 00:12:41.835 "namespaces": [ 00:12:41.835 { 00:12:41.835 "nsid": 1, 00:12:41.835 "bdev_name": "Malloc2", 00:12:41.835 "name": "Malloc2", 00:12:41.835 "nguid": "C743DA07C50547E6BF9752DBC5D2E79C", 00:12:41.835 "uuid": "c743da07-c505-47e6-bf97-52dbc5d2e79c" 00:12:41.835 } 00:12:41.835 ] 00:12:41.835 } 00:12:41.835 ] 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3665160 00:12:41.835 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:41.836 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:41.836 [2024-11-26 19:19:15.690434] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:42.095 Malloc4 00:12:42.095 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:42.095 [2024-11-26 19:19:15.869699] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:42.095 19:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:42.095 Asynchronous Event Request test 00:12:42.095 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:42.095 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:42.095 Registering asynchronous event callbacks... 00:12:42.095 Starting namespace attribute notice tests for all controllers... 00:12:42.095 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:42.095 aer_cb - Changed Namespace 00:12:42.095 Cleaning up... 
00:12:42.354 [ 00:12:42.354 { 00:12:42.354 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:42.354 "subtype": "Discovery", 00:12:42.354 "listen_addresses": [], 00:12:42.354 "allow_any_host": true, 00:12:42.354 "hosts": [] 00:12:42.354 }, 00:12:42.354 { 00:12:42.354 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:42.354 "subtype": "NVMe", 00:12:42.354 "listen_addresses": [ 00:12:42.354 { 00:12:42.354 "trtype": "VFIOUSER", 00:12:42.354 "adrfam": "IPv4", 00:12:42.354 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:42.354 "trsvcid": "0" 00:12:42.354 } 00:12:42.354 ], 00:12:42.354 "allow_any_host": true, 00:12:42.354 "hosts": [], 00:12:42.354 "serial_number": "SPDK1", 00:12:42.354 "model_number": "SPDK bdev Controller", 00:12:42.354 "max_namespaces": 32, 00:12:42.354 "min_cntlid": 1, 00:12:42.354 "max_cntlid": 65519, 00:12:42.354 "namespaces": [ 00:12:42.354 { 00:12:42.354 "nsid": 1, 00:12:42.354 "bdev_name": "Malloc1", 00:12:42.354 "name": "Malloc1", 00:12:42.354 "nguid": "6824E3C177E74722A9530F6AA3094D1A", 00:12:42.354 "uuid": "6824e3c1-77e7-4722-a953-0f6aa3094d1a" 00:12:42.354 }, 00:12:42.354 { 00:12:42.354 "nsid": 2, 00:12:42.354 "bdev_name": "Malloc3", 00:12:42.354 "name": "Malloc3", 00:12:42.354 "nguid": "64DB821A5CED41B099BA402075F3575C", 00:12:42.354 "uuid": "64db821a-5ced-41b0-99ba-402075f3575c" 00:12:42.354 } 00:12:42.354 ] 00:12:42.354 }, 00:12:42.354 { 00:12:42.354 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:42.354 "subtype": "NVMe", 00:12:42.354 "listen_addresses": [ 00:12:42.354 { 00:12:42.354 "trtype": "VFIOUSER", 00:12:42.354 "adrfam": "IPv4", 00:12:42.354 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:42.354 "trsvcid": "0" 00:12:42.354 } 00:12:42.354 ], 00:12:42.354 "allow_any_host": true, 00:12:42.354 "hosts": [], 00:12:42.354 "serial_number": "SPDK2", 00:12:42.354 "model_number": "SPDK bdev Controller", 00:12:42.354 "max_namespaces": 32, 00:12:42.354 "min_cntlid": 1, 00:12:42.354 "max_cntlid": 65519, 00:12:42.354 "namespaces": [ 
00:12:42.354 { 00:12:42.354 "nsid": 1, 00:12:42.355 "bdev_name": "Malloc2", 00:12:42.355 "name": "Malloc2", 00:12:42.355 "nguid": "C743DA07C50547E6BF9752DBC5D2E79C", 00:12:42.355 "uuid": "c743da07-c505-47e6-bf97-52dbc5d2e79c" 00:12:42.355 }, 00:12:42.355 { 00:12:42.355 "nsid": 2, 00:12:42.355 "bdev_name": "Malloc4", 00:12:42.355 "name": "Malloc4", 00:12:42.355 "nguid": "39CCAE5EBF5245228AE914381EF54095", 00:12:42.355 "uuid": "39ccae5e-bf52-4522-8ae9-14381ef54095" 00:12:42.355 } 00:12:42.355 ] 00:12:42.355 } 00:12:42.355 ] 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3665160 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3655126 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3655126 ']' 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3655126 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3655126 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3655126' 00:12:42.355 killing process with pid 3655126 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 3655126 00:12:42.355 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3655126 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3665175 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3665175' 00:12:42.614 Process pid: 3665175 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3665175 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3665175 ']' 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:42.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:42.614 [2024-11-26 19:19:16.271711] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:42.614 [2024-11-26 19:19:16.272650] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:12:42.614 [2024-11-26 19:19:16.272692] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.614 [2024-11-26 19:19:16.339008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.614 [2024-11-26 19:19:16.368051] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.614 [2024-11-26 19:19:16.368081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.614 [2024-11-26 19:19:16.368087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.614 [2024-11-26 19:19:16.368092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.614 [2024-11-26 19:19:16.368096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:42.614 [2024-11-26 19:19:16.369430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.614 [2024-11-26 19:19:16.369581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.614 [2024-11-26 19:19:16.369730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.614 [2024-11-26 19:19:16.369732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.614 [2024-11-26 19:19:16.421412] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:42.614 [2024-11-26 19:19:16.422292] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:12:42.614 [2024-11-26 19:19:16.422586] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:42.614 [2024-11-26 19:19:16.422706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:42.614 [2024-11-26 19:19:16.422713] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:42.614 19:19:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:43.993 Malloc1 00:12:43.993 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:44.252 19:19:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:44.252 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:12:44.512 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:44.512 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:44.512 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:44.772 Malloc2 00:12:44.772 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:44.772 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:45.030 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3665175 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3665175 ']' 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3665175 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:45.289 19:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665175 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665175' 00:12:45.289 killing process with pid 3665175 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3665175 00:12:45.289 19:19:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3665175 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:45.289 00:12:45.289 real 0m48.698s 00:12:45.289 user 3m8.953s 00:12:45.289 sys 0m2.258s 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:45.289 ************************************ 00:12:45.289 END TEST nvmf_vfio_user 00:12:45.289 ************************************ 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.289 ************************************ 00:12:45.289 START TEST nvmf_vfio_user_nvme_compliance 00:12:45.289 ************************************ 00:12:45.289 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:45.549 * Looking for test storage... 00:12:45.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:12:45.549 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:45.549 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.549 --rc genhtml_branch_coverage=1 00:12:45.549 --rc genhtml_function_coverage=1 00:12:45.549 --rc genhtml_legend=1 00:12:45.549 --rc geninfo_all_blocks=1 00:12:45.549 --rc geninfo_unexecuted_blocks=1 00:12:45.549 00:12:45.549 ' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.549 --rc genhtml_branch_coverage=1 00:12:45.549 --rc genhtml_function_coverage=1 00:12:45.549 --rc genhtml_legend=1 00:12:45.549 --rc geninfo_all_blocks=1 00:12:45.549 --rc geninfo_unexecuted_blocks=1 00:12:45.549 00:12:45.549 ' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.549 --rc genhtml_branch_coverage=1 00:12:45.549 --rc genhtml_function_coverage=1 00:12:45.549 --rc 
genhtml_legend=1 00:12:45.549 --rc geninfo_all_blocks=1 00:12:45.549 --rc geninfo_unexecuted_blocks=1 00:12:45.549 00:12:45.549 ' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:45.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.549 --rc genhtml_branch_coverage=1 00:12:45.549 --rc genhtml_function_coverage=1 00:12:45.549 --rc genhtml_legend=1 00:12:45.549 --rc geninfo_all_blocks=1 00:12:45.549 --rc geninfo_unexecuted_blocks=1 00:12:45.549 00:12:45.549 ' 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:45.549 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.550 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.550 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3665922 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3665922' 00:12:45.550 Process pid: 3665922 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3665922 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3665922 ']' 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.550 19:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:45.550 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:45.550 [2024-11-26 19:19:19.316655] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:12:45.550 [2024-11-26 19:19:19.316707] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.550 [2024-11-26 19:19:19.381980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:45.550 [2024-11-26 19:19:19.411625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.550 [2024-11-26 19:19:19.411653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.550 [2024-11-26 19:19:19.411659] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.550 [2024-11-26 19:19:19.411664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.550 [2024-11-26 19:19:19.411668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:45.550 [2024-11-26 19:19:19.412848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.550 [2024-11-26 19:19:19.413011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.810 [2024-11-26 19:19:19.413013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.810 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.810 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:12:45.810 19:19:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.748 19:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.748 malloc0 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:46.748 19:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:47.008 00:12:47.008 00:12:47.008 CUnit - A unit testing framework for C - Version 2.1-3 00:12:47.008 http://cunit.sourceforge.net/ 00:12:47.008 00:12:47.008 00:12:47.008 Suite: nvme_compliance 00:12:47.008 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-26 19:19:20.706476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.008 [2024-11-26 19:19:20.707788] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:47.008 [2024-11-26 19:19:20.707799] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:47.008 [2024-11-26 19:19:20.707804] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:47.008 [2024-11-26 19:19:20.709494] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.008 passed 00:12:47.008 Test: admin_identify_ctrlr_verify_fused ...[2024-11-26 19:19:20.787204] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.008 [2024-11-26 19:19:20.790222] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.008 passed 00:12:47.008 Test: admin_identify_ns ...[2024-11-26 19:19:20.870085] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.267 [2024-11-26 19:19:20.934112] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:47.267 [2024-11-26 19:19:20.942109] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:47.267 [2024-11-26 19:19:20.963188] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:12:47.267 passed 00:12:47.267 Test: admin_get_features_mandatory_features ...[2024-11-26 19:19:21.039226] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.267 [2024-11-26 19:19:21.042244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.267 passed 00:12:47.267 Test: admin_get_features_optional_features ...[2024-11-26 19:19:21.120701] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.267 [2024-11-26 19:19:21.123720] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.527 passed 00:12:47.527 Test: admin_set_features_number_of_queues ...[2024-11-26 19:19:21.198444] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.527 [2024-11-26 19:19:21.304197] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.527 passed 00:12:47.527 Test: admin_get_log_page_mandatory_logs ...[2024-11-26 19:19:21.379413] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.527 [2024-11-26 19:19:21.382430] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.787 passed 00:12:47.787 Test: admin_get_log_page_with_lpo ...[2024-11-26 19:19:21.458439] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.787 [2024-11-26 19:19:21.526111] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:47.787 [2024-11-26 19:19:21.539155] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.787 passed 00:12:47.787 Test: fabric_property_get ...[2024-11-26 19:19:21.615245] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:47.787 [2024-11-26 19:19:21.616451] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:47.787 [2024-11-26 19:19:21.618262] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:47.787 passed 00:12:48.047 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-26 19:19:21.695750] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.047 [2024-11-26 19:19:21.696957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:48.047 [2024-11-26 19:19:21.698779] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.047 passed 00:12:48.047 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-26 19:19:21.773494] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.047 [2024-11-26 19:19:21.855109] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.047 [2024-11-26 19:19:21.872105] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.047 [2024-11-26 19:19:21.877185] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.047 passed 00:12:48.307 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-26 19:19:21.950366] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.307 [2024-11-26 19:19:21.951562] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:48.307 [2024-11-26 19:19:21.953379] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.307 passed 00:12:48.307 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-26 19:19:22.028450] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.307 [2024-11-26 19:19:22.108107] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:48.307 [2024-11-26 
19:19:22.132104] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:48.307 [2024-11-26 19:19:22.137167] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.307 passed 00:12:48.566 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-26 19:19:22.209328] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.566 [2024-11-26 19:19:22.210528] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:48.566 [2024-11-26 19:19:22.210544] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:48.566 [2024-11-26 19:19:22.212354] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.566 passed 00:12:48.566 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-26 19:19:22.287450] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.566 [2024-11-26 19:19:22.383106] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:48.566 [2024-11-26 19:19:22.391106] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:48.566 [2024-11-26 19:19:22.399106] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:48.566 [2024-11-26 19:19:22.407106] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:48.826 [2024-11-26 19:19:22.432180] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.826 passed 00:12:48.826 Test: admin_create_io_sq_verify_pc ...[2024-11-26 19:19:22.507250] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:48.826 [2024-11-26 19:19:22.524111] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:48.826 [2024-11-26 19:19:22.541401] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:48.826 passed 00:12:48.826 Test: admin_create_io_qp_max_qps ...[2024-11-26 19:19:22.619873] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.207 [2024-11-26 19:19:23.725108] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:12:50.466 [2024-11-26 19:19:24.105135] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.466 passed 00:12:50.466 Test: admin_create_io_sq_shared_cq ...[2024-11-26 19:19:24.182464] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:50.466 [2024-11-26 19:19:24.315111] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:50.725 [2024-11-26 19:19:24.352153] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:50.725 passed 00:12:50.725 00:12:50.725 Run Summary: Type Total Ran Passed Failed Inactive 00:12:50.725 suites 1 1 n/a 0 0 00:12:50.725 tests 18 18 18 0 0 00:12:50.725 asserts 360 360 360 0 n/a 00:12:50.725 00:12:50.725 Elapsed time = 1.496 seconds 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3665922 ']' 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3665922' 00:12:50.725 killing process with pid 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3665922 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:12:50.725 00:12:50.725 real 0m5.419s 00:12:50.725 user 0m15.411s 00:12:50.725 sys 0m0.429s 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:50.725 ************************************ 00:12:50.725 END TEST nvmf_vfio_user_nvme_compliance 00:12:50.725 ************************************ 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.725 19:19:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.985 ************************************ 00:12:50.986 START TEST nvmf_vfio_user_fuzz 00:12:50.986 ************************************ 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:12:50.986 * Looking for test storage... 00:12:50.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.986 19:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.986 --rc genhtml_branch_coverage=1 00:12:50.986 --rc genhtml_function_coverage=1 00:12:50.986 --rc genhtml_legend=1 00:12:50.986 --rc geninfo_all_blocks=1 00:12:50.986 --rc geninfo_unexecuted_blocks=1 00:12:50.986 00:12:50.986 ' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.986 --rc genhtml_branch_coverage=1 00:12:50.986 --rc genhtml_function_coverage=1 00:12:50.986 --rc genhtml_legend=1 00:12:50.986 --rc geninfo_all_blocks=1 00:12:50.986 --rc geninfo_unexecuted_blocks=1 00:12:50.986 00:12:50.986 ' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:50.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.986 --rc genhtml_branch_coverage=1 00:12:50.986 --rc genhtml_function_coverage=1 00:12:50.986 --rc genhtml_legend=1 00:12:50.986 --rc geninfo_all_blocks=1 00:12:50.986 --rc geninfo_unexecuted_blocks=1 00:12:50.986 00:12:50.986 ' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:50.986 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:50.986 --rc genhtml_branch_coverage=1 00:12:50.986 --rc genhtml_function_coverage=1 00:12:50.986 --rc genhtml_legend=1 00:12:50.986 --rc geninfo_all_blocks=1 00:12:50.986 --rc geninfo_unexecuted_blocks=1 00:12:50.986 00:12:50.986 ' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.986 19:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.986 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3667236 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3667236' 00:12:50.987 Process pid: 3667236 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3667236 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3667236 ']' 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:50.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:50.987 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:51.247 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.247 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:12:51.247 19:19:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 malloc0 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:12:52.186 19:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:24.283 Fuzzing completed. Shutting down the fuzz application 00:13:24.283 00:13:24.283 Dumping successful admin opcodes: 00:13:24.283 9, 10, 00:13:24.283 Dumping successful io opcodes: 00:13:24.283 0, 00:13:24.283 NS: 0x20000081ef00 I/O qp, Total commands completed: 1293256, total successful commands: 5072, random_seed: 3170141248 00:13:24.283 NS: 0x20000081ef00 admin qp, Total commands completed: 308043, total successful commands: 76, random_seed: 2776779136 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3667236 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3667236 ']' 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3667236 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3667236 00:13:24.283 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3667236' 00:13:24.283 killing process with pid 3667236 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3667236 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3667236 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:24.283 00:13:24.283 real 0m31.954s 00:13:24.283 user 0m33.250s 00:13:24.283 sys 0m26.351s 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:24.283 ************************************ 00:13:24.283 END TEST nvmf_vfio_user_fuzz 00:13:24.283 ************************************ 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:24.283 ************************************ 00:13:24.283 START TEST nvmf_auth_target 00:13:24.283 ************************************ 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:24.283 * Looking for test storage... 00:13:24.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:24.283 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:24.284 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:24.284 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:24.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.284 --rc genhtml_branch_coverage=1 00:13:24.284 --rc genhtml_function_coverage=1 00:13:24.284 --rc genhtml_legend=1 00:13:24.284 --rc geninfo_all_blocks=1 00:13:24.284 --rc geninfo_unexecuted_blocks=1 00:13:24.284 00:13:24.284 ' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:24.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.284 --rc genhtml_branch_coverage=1 00:13:24.284 --rc genhtml_function_coverage=1 00:13:24.284 --rc genhtml_legend=1 00:13:24.284 --rc geninfo_all_blocks=1 00:13:24.284 --rc geninfo_unexecuted_blocks=1 00:13:24.284 00:13:24.284 ' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:24.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.284 --rc genhtml_branch_coverage=1 00:13:24.284 --rc genhtml_function_coverage=1 00:13:24.284 --rc genhtml_legend=1 00:13:24.284 --rc geninfo_all_blocks=1 00:13:24.284 --rc geninfo_unexecuted_blocks=1 00:13:24.284 00:13:24.284 ' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:24.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:24.284 --rc genhtml_branch_coverage=1 00:13:24.284 --rc genhtml_function_coverage=1 00:13:24.284 --rc genhtml_legend=1 00:13:24.284 
--rc geninfo_all_blocks=1 00:13:24.284 --rc geninfo_unexecuted_blocks=1 00:13:24.284 00:13:24.284 ' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:24.284 
19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:24.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:24.284 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:24.284 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:24.285 19:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:13:24.285 19:19:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.600 19:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.600 19:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:28.600 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:28.600 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.600 
19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:28.600 Found net devices under 0000:31:00.0: cvl_0_0 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.600 
19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:28.600 Found net devices under 0000:31:00.1: cvl_0_1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.600 19:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.600 19:20:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.571 ms 00:13:28.600 00:13:28.600 --- 10.0.0.2 ping statistics --- 00:13:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.600 rtt min/avg/max/mdev = 0.571/0.571/0.571/0.000 ms 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:13:28.600 00:13:28.600 --- 10.0.0.1 ping statistics --- 00:13:28.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.600 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3677962 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3677962 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3677962 ']' 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3677982 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' 
['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=05c8f405984b2d36c77667dd0a7e7818e5814855317febc2 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BuV 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 05c8f405984b2d36c77667dd0a7e7818e5814855317febc2 0 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 05c8f405984b2d36c77667dd0a7e7818e5814855317febc2 0 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=05c8f405984b2d36c77667dd0a7e7818e5814855317febc2 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.600 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BuV 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo 
/tmp/spdk.key-null.BuV 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.BuV 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=77807d8b0d4cf11d49e28dc8ba35124944e4291984808d0109e4587a7c11995b 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.yvz 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 77807d8b0d4cf11d49e28dc8ba35124944e4291984808d0109e4587a7c11995b 3 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 77807d8b0d4cf11d49e28dc8ba35124944e4291984808d0109e4587a7c11995b 3 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.601 19:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=77807d8b0d4cf11d49e28dc8ba35124944e4291984808d0109e4587a7c11995b 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.yvz 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.yvz 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.yvz 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=002c95fcb47475befa49e85f2173ce11 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Gum 00:13:28.601 19:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 002c95fcb47475befa49e85f2173ce11 1 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 002c95fcb47475befa49e85f2173ce11 1 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=002c95fcb47475befa49e85f2173ce11 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Gum 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Gum 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Gum 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:28.601 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=35614aad2f544b7a8e23a85f1c6e0ef6403d27ff01a48309 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.qKe 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 35614aad2f544b7a8e23a85f1c6e0ef6403d27ff01a48309 2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 35614aad2f544b7a8e23a85f1c6e0ef6403d27ff01a48309 2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=35614aad2f544b7a8e23a85f1c6e0ef6403d27ff01a48309 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.qKe 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.qKe 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.qKe 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.860 19:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4fb4b77cbf2bd1242e6d9634ce287af35338c56214257154 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.KK1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4fb4b77cbf2bd1242e6d9634ce287af35338c56214257154 2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4fb4b77cbf2bd1242e6d9634ce287af35338c56214257154 2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4fb4b77cbf2bd1242e6d9634ce287af35338c56214257154 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.KK1 
00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.KK1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.KK1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8b2a201bbd89ff396e029454ff4c7ee8 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.oyL 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8b2a201bbd89ff396e029454ff4c7ee8 1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8b2a201bbd89ff396e029454ff4c7ee8 1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.860 19:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8b2a201bbd89ff396e029454ff4c7ee8 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.oyL 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.oyL 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.oyL 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1393aac8343fb13e4a87363a0836a00e3a773c80b2ebc4b787eb8f22355dad7a 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6jM 00:13:28.860 19:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1393aac8343fb13e4a87363a0836a00e3a773c80b2ebc4b787eb8f22355dad7a 3 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1393aac8343fb13e4a87363a0836a00e3a773c80b2ebc4b787eb8f22355dad7a 3 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1393aac8343fb13e4a87363a0836a00e3a773c80b2ebc4b787eb8f22355dad7a 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6jM 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6jM 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.6jM 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3677962 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3677962 ']' 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.860 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3677982 /var/tmp/host.sock 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3677982 ']' 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:29.120 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:29.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
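The `gen_dhchap_key`/`format_dhchap_key` steps traced above draw random bytes with `xxd -p -c0 -l N /dev/urandom`, then run an inline `python -` heredoc to wrap the hex secret into the `DHHC-1:<digest-id>:<base64>:` transport form seen in the `nvme connect` commands later in this log. The heredoc body is not shown in the trace, so the following is a sketch of that formatting step, assuming the layout implied by the secrets in this log (the ASCII hex string itself, with a checksum trailer appended before base64 encoding; the little-endian CRC32 trailer is an assumption, not confirmed by the trace):

```python
import base64
import zlib


def format_dhchap_key(hex_key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the DHHC-1 secret formatting done by the python heredoc.

    Assumed layout: the ASCII hex string is the payload, a 4-byte
    little-endian CRC32 of it is appended, and the result is base64
    encoded into '<prefix>:<2-hex-digit digest id>:<base64>:'.
    """
    payload = hex_key.encode("ascii")                      # hex string as ASCII bytes
    crc = zlib.crc32(payload).to_bytes(4, "little")        # assumed checksum trailer
    b64 = base64.b64encode(payload + crc).decode("ascii")
    return f"{prefix}:{digest_id:02x}:{b64}:"


# The sha256 (digest id 1) key generated in the trace above:
print(format_dhchap_key("8b2a201bbd89ff396e029454ff4c7ee8", 1))
```

Decoding the base64 of such a secret recovers the original hex string plus the trailer, which is how the target and host can validate a key file added via `keyring_file_add_key`.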
00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.121 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BuV 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.BuV 00:13:29.418 19:20:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.BuV 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.yvz ]] 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yvz 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yvz 00:13:29.418 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yvz 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Gum 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Gum 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Gum 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.qKe ]] 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qKe 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qKe 00:13:29.677 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qKe 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KK1 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KK1 00:13:29.936 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KK1 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.oyL ]] 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oyL 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oyL 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oyL 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6jM 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.6jM 00:13:30.193 19:20:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.6jM 00:13:30.452 19:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.452 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.452 19:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.710 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.710 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.969 { 00:13:30.969 "cntlid": 1, 00:13:30.969 "qid": 0, 00:13:30.969 "state": "enabled", 00:13:30.969 "thread": "nvmf_tgt_poll_group_000", 00:13:30.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:30.969 "listen_address": { 00:13:30.969 "trtype": "TCP", 00:13:30.969 "adrfam": "IPv4", 00:13:30.969 "traddr": "10.0.0.2", 00:13:30.969 "trsvcid": "4420" 00:13:30.969 }, 00:13:30.969 "peer_address": { 00:13:30.969 "trtype": "TCP", 00:13:30.969 "adrfam": "IPv4", 00:13:30.969 "traddr": "10.0.0.1", 00:13:30.969 "trsvcid": "59214" 00:13:30.969 }, 00:13:30.969 "auth": { 00:13:30.969 "state": "completed", 00:13:30.969 "digest": "sha256", 00:13:30.969 "dhgroup": "null" 00:13:30.969 } 00:13:30.969 } 00:13:30.969 ]' 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.969 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.228 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:31.228 19:20:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:13:31.796 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.055 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.313 00:13:32.313 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.313 19:20:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.313 { 00:13:32.313 "cntlid": 3, 00:13:32.313 "qid": 0, 00:13:32.313 "state": "enabled", 00:13:32.313 "thread": "nvmf_tgt_poll_group_000", 00:13:32.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:32.313 "listen_address": { 00:13:32.313 "trtype": "TCP", 00:13:32.313 "adrfam": "IPv4", 00:13:32.313 
"traddr": "10.0.0.2", 00:13:32.313 "trsvcid": "4420" 00:13:32.313 }, 00:13:32.313 "peer_address": { 00:13:32.313 "trtype": "TCP", 00:13:32.313 "adrfam": "IPv4", 00:13:32.313 "traddr": "10.0.0.1", 00:13:32.313 "trsvcid": "49924" 00:13:32.313 }, 00:13:32.313 "auth": { 00:13:32.313 "state": "completed", 00:13:32.313 "digest": "sha256", 00:13:32.313 "dhgroup": "null" 00:13:32.313 } 00:13:32.313 } 00:13:32.313 ]' 00:13:32.313 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:32.572 19:20:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
--hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.509 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.510 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:33.770 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.770 { 00:13:33.770 "cntlid": 5, 00:13:33.770 "qid": 0, 00:13:33.770 "state": "enabled", 00:13:33.770 "thread": "nvmf_tgt_poll_group_000", 00:13:33.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:33.770 "listen_address": { 00:13:33.770 "trtype": "TCP", 00:13:33.770 "adrfam": "IPv4", 00:13:33.770 "traddr": "10.0.0.2", 00:13:33.770 "trsvcid": "4420" 00:13:33.770 }, 00:13:33.770 "peer_address": { 00:13:33.770 "trtype": "TCP", 00:13:33.770 "adrfam": "IPv4", 00:13:33.770 "traddr": "10.0.0.1", 00:13:33.770 "trsvcid": "49956" 00:13:33.770 }, 00:13:33.770 "auth": { 00:13:33.770 "state": "completed", 00:13:33.770 "digest": "sha256", 00:13:33.770 "dhgroup": "null" 00:13:33.770 } 00:13:33.770 } 00:13:33.770 ]' 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.770 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:33.770 19:20:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:34.029 19:20:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:34.598 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:34.858 
19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.858 19:20:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:34.858 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:35.117 00:13:35.117 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.117 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.117 19:20:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.376 19:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.376 { 00:13:35.376 "cntlid": 7, 00:13:35.376 "qid": 0, 00:13:35.376 "state": "enabled", 00:13:35.376 "thread": "nvmf_tgt_poll_group_000", 00:13:35.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:35.376 "listen_address": { 00:13:35.376 "trtype": "TCP", 00:13:35.376 "adrfam": "IPv4", 00:13:35.376 "traddr": "10.0.0.2", 00:13:35.376 "trsvcid": "4420" 00:13:35.376 }, 00:13:35.376 "peer_address": { 00:13:35.376 "trtype": "TCP", 00:13:35.376 "adrfam": "IPv4", 00:13:35.376 "traddr": "10.0.0.1", 00:13:35.376 "trsvcid": "49988" 00:13:35.376 }, 00:13:35.376 "auth": { 00:13:35.376 "state": "completed", 00:13:35.376 "digest": "sha256", 00:13:35.376 "dhgroup": "null" 00:13:35.376 } 00:13:35.376 } 00:13:35.376 ]' 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.376 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:13:35.634 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:35.634 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:36.202 19:20:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.461 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.461 19:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:36.461 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.721 { 00:13:36.721 "cntlid": 9, 00:13:36.721 "qid": 0, 00:13:36.721 "state": "enabled", 00:13:36.721 "thread": "nvmf_tgt_poll_group_000", 00:13:36.721 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:36.721 "listen_address": { 00:13:36.721 "trtype": "TCP", 00:13:36.721 "adrfam": "IPv4", 00:13:36.721 "traddr": "10.0.0.2", 00:13:36.721 "trsvcid": "4420" 00:13:36.721 }, 00:13:36.721 "peer_address": { 
00:13:36.721 "trtype": "TCP", 00:13:36.721 "adrfam": "IPv4", 00:13:36.721 "traddr": "10.0.0.1", 00:13:36.721 "trsvcid": "50020" 00:13:36.721 }, 00:13:36.721 "auth": { 00:13:36.721 "state": "completed", 00:13:36.721 "digest": "sha256", 00:13:36.721 "dhgroup": "ffdhe2048" 00:13:36.721 } 00:13:36.721 } 00:13:36.721 ]' 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:36.721 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.980 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.980 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.980 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.980 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:36.980 19:20:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:37.548 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.807 19:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.807 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:38.066 00:13:38.066 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.066 19:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.066 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.326 { 00:13:38.326 "cntlid": 11, 00:13:38.326 "qid": 0, 00:13:38.326 "state": "enabled", 00:13:38.326 "thread": "nvmf_tgt_poll_group_000", 00:13:38.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:38.326 "listen_address": { 00:13:38.326 "trtype": "TCP", 00:13:38.326 "adrfam": "IPv4", 00:13:38.326 "traddr": "10.0.0.2", 00:13:38.326 "trsvcid": "4420" 00:13:38.326 }, 00:13:38.326 "peer_address": { 00:13:38.326 "trtype": "TCP", 00:13:38.326 "adrfam": "IPv4", 00:13:38.326 "traddr": "10.0.0.1", 00:13:38.326 "trsvcid": "50040" 00:13:38.326 }, 00:13:38.326 "auth": { 00:13:38.326 "state": "completed", 00:13:38.326 "digest": "sha256", 00:13:38.326 "dhgroup": "ffdhe2048" 00:13:38.326 } 00:13:38.326 } 00:13:38.326 ]' 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:13:38.326 19:20:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.326 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:38.326 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.326 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.326 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.326 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.584 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:38.584 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.152 19:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.152 19:20:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.411 00:13:39.411 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.411 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.411 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.670 { 00:13:39.670 "cntlid": 13, 00:13:39.670 "qid": 0, 00:13:39.670 "state": "enabled", 00:13:39.670 "thread": "nvmf_tgt_poll_group_000", 00:13:39.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:39.670 "listen_address": { 00:13:39.670 "trtype": "TCP", 00:13:39.670 "adrfam": "IPv4", 00:13:39.670 "traddr": "10.0.0.2", 00:13:39.670 "trsvcid": "4420" 00:13:39.670 }, 00:13:39.670 "peer_address": { 00:13:39.670 "trtype": "TCP", 00:13:39.670 "adrfam": "IPv4", 00:13:39.670 "traddr": "10.0.0.1", 00:13:39.670 "trsvcid": "50072" 00:13:39.670 }, 00:13:39.670 "auth": { 00:13:39.670 "state": "completed", 00:13:39.670 "digest": "sha256", 00:13:39.670 "dhgroup": "ffdhe2048" 00:13:39.670 } 00:13:39.670 } 00:13:39.670 ]' 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:39.670 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.929 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:39.929 19:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:40.497 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:40.756 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.016 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.016 { 00:13:41.016 "cntlid": 15, 00:13:41.016 "qid": 0, 00:13:41.016 "state": "enabled", 00:13:41.016 "thread": "nvmf_tgt_poll_group_000", 00:13:41.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:41.016 "listen_address": { 00:13:41.016 "trtype": "TCP", 00:13:41.016 "adrfam": "IPv4", 00:13:41.016 "traddr": "10.0.0.2", 00:13:41.016 "trsvcid": 
"4420" 00:13:41.016 }, 00:13:41.016 "peer_address": { 00:13:41.016 "trtype": "TCP", 00:13:41.016 "adrfam": "IPv4", 00:13:41.016 "traddr": "10.0.0.1", 00:13:41.016 "trsvcid": "50106" 00:13:41.016 }, 00:13:41.016 "auth": { 00:13:41.016 "state": "completed", 00:13:41.016 "digest": "sha256", 00:13:41.016 "dhgroup": "ffdhe2048" 00:13:41.016 } 00:13:41.016 } 00:13:41.016 ]' 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.016 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.275 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:41.275 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.275 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.275 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.275 19:20:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.275 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:41.275 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret 
DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:41.842 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.102 19:20:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:42.360 00:13:42.360 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.360 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.360 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.619 { 00:13:42.619 "cntlid": 17, 00:13:42.619 "qid": 0, 00:13:42.619 "state": "enabled", 00:13:42.619 "thread": "nvmf_tgt_poll_group_000", 00:13:42.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:42.619 "listen_address": { 00:13:42.619 "trtype": "TCP", 00:13:42.619 "adrfam": "IPv4", 00:13:42.619 "traddr": "10.0.0.2", 00:13:42.619 "trsvcid": "4420" 00:13:42.619 }, 00:13:42.619 "peer_address": { 00:13:42.619 "trtype": "TCP", 00:13:42.619 "adrfam": "IPv4", 00:13:42.619 "traddr": "10.0.0.1", 00:13:42.619 "trsvcid": "48006" 00:13:42.619 }, 00:13:42.619 "auth": { 00:13:42.619 "state": "completed", 00:13:42.619 "digest": "sha256", 00:13:42.619 "dhgroup": "ffdhe3072" 00:13:42.619 } 00:13:42.619 } 00:13:42.619 ]' 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.619 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:42.620 19:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.620 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:42.620 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.620 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.620 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.620 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.878 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:42.878 19:20:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.446 19:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.446 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.705 00:13:43.705 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.705 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.705 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.963 { 00:13:43.963 "cntlid": 19, 00:13:43.963 "qid": 0, 00:13:43.963 "state": "enabled", 00:13:43.963 "thread": "nvmf_tgt_poll_group_000", 00:13:43.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:43.963 "listen_address": { 00:13:43.963 "trtype": "TCP", 00:13:43.963 "adrfam": "IPv4", 00:13:43.963 "traddr": "10.0.0.2", 00:13:43.963 "trsvcid": "4420" 00:13:43.963 }, 00:13:43.963 "peer_address": { 00:13:43.963 "trtype": "TCP", 00:13:43.963 "adrfam": "IPv4", 00:13:43.963 "traddr": "10.0.0.1", 00:13:43.963 "trsvcid": "48024" 00:13:43.963 }, 00:13:43.963 "auth": { 00:13:43.963 "state": "completed", 00:13:43.963 "digest": "sha256", 00:13:43.963 "dhgroup": "ffdhe3072" 00:13:43.963 } 00:13:43.963 } 00:13:43.963 ]' 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:13:43.963 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.222 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:44.222 19:20:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:44.790 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.051 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:45.051 19:20:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.311 { 00:13:45.311 "cntlid": 21, 00:13:45.311 "qid": 0, 00:13:45.311 "state": "enabled", 00:13:45.311 "thread": "nvmf_tgt_poll_group_000", 00:13:45.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:45.311 "listen_address": { 
00:13:45.311 "trtype": "TCP", 00:13:45.311 "adrfam": "IPv4", 00:13:45.311 "traddr": "10.0.0.2", 00:13:45.311 "trsvcid": "4420" 00:13:45.311 }, 00:13:45.311 "peer_address": { 00:13:45.311 "trtype": "TCP", 00:13:45.311 "adrfam": "IPv4", 00:13:45.311 "traddr": "10.0.0.1", 00:13:45.311 "trsvcid": "48042" 00:13:45.311 }, 00:13:45.311 "auth": { 00:13:45.311 "state": "completed", 00:13:45.311 "digest": "sha256", 00:13:45.311 "dhgroup": "ffdhe3072" 00:13:45.311 } 00:13:45.311 } 00:13:45.311 ]' 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.311 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.569 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:45.569 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:46.136 19:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.395 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.654 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.654 { 00:13:46.654 "cntlid": 23, 00:13:46.654 "qid": 0, 00:13:46.654 "state": "enabled", 00:13:46.654 "thread": "nvmf_tgt_poll_group_000", 00:13:46.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:46.654 "listen_address": { 00:13:46.654 "trtype": "TCP", 00:13:46.654 "adrfam": "IPv4", 00:13:46.654 "traddr": "10.0.0.2", 00:13:46.654 "trsvcid": "4420" 00:13:46.654 }, 00:13:46.654 "peer_address": { 00:13:46.654 "trtype": "TCP", 00:13:46.654 "adrfam": "IPv4", 00:13:46.654 "traddr": "10.0.0.1", 00:13:46.654 "trsvcid": "48058" 00:13:46.654 }, 00:13:46.654 "auth": { 00:13:46.654 "state": "completed", 00:13:46.654 "digest": "sha256", 00:13:46.654 "dhgroup": "ffdhe3072" 00:13:46.654 } 00:13:46.654 } 00:13:46.654 ]' 00:13:46.654 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.914 19:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:46.914 19:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:47.483 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
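The `--dhchap-secret` strings passed to `nvme connect` above (`DHHC-1:<hh>:<base64>:`) follow the NVMe DH-HMAC-CHAP secret representation: the middle field identifies the optional key transformation hash (`00` = none, `01`/`02`/`03` = SHA-256/384/512), and the base64 payload is the raw key material followed by its CRC-32. A minimal sketch of that encoding, assuming the little-endian CRC trailer used by nvme-cli and the Linux kernel implementation (the function names here are illustrative, not SPDK APIs):

```python
import base64
import os
import zlib

def encode_dhchap_secret(key: bytes, hash_id: int = 0) -> str:
    """Format raw key material as a DHHC-1 secret string: base64 of the
    key followed by its CRC-32 (little-endian), bracketed by colons."""
    crc = zlib.crc32(key).to_bytes(4, "little")
    return f"DHHC-1:{hash_id:02x}:{base64.b64encode(key + crc).decode()}:"

def decode_dhchap_secret(secret: str) -> bytes:
    """Parse a DHHC-1 secret and verify its CRC-32 trailer."""
    _prefix, _hash_id, payload, _empty = secret.split(":")
    raw = base64.b64decode(payload)
    key, crc = raw[:-4], raw[-4:]
    if zlib.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("DHHC-1 secret failed CRC check")
    return key

# Round-trip a random 32-byte key (48- and 64-byte keys are also valid).
key = os.urandom(32)
secret = encode_dhchap_secret(key)
assert decode_dhchap_secret(secret) == key
print(secret[:10] + "...")
```

The controller-side secret (`--dhchap-ctrl-secret` in the bidirectional iterations above) uses the same representation; the test's key3 case omits it, which is why that iteration authenticates in one direction only.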
00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.742 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.001 00:13:48.001 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.001 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.001 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.259 19:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.259 { 00:13:48.259 "cntlid": 25, 00:13:48.259 "qid": 0, 00:13:48.259 "state": "enabled", 00:13:48.259 "thread": "nvmf_tgt_poll_group_000", 00:13:48.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:48.259 "listen_address": { 00:13:48.259 "trtype": "TCP", 00:13:48.259 "adrfam": "IPv4", 00:13:48.259 "traddr": "10.0.0.2", 00:13:48.259 "trsvcid": "4420" 00:13:48.259 }, 00:13:48.259 "peer_address": { 00:13:48.259 "trtype": "TCP", 00:13:48.259 "adrfam": "IPv4", 00:13:48.259 "traddr": "10.0.0.1", 00:13:48.259 "trsvcid": "48088" 00:13:48.259 }, 00:13:48.259 "auth": { 00:13:48.259 "state": "completed", 00:13:48.259 "digest": "sha256", 00:13:48.259 "dhgroup": "ffdhe4096" 00:13:48.259 } 00:13:48.259 } 00:13:48.259 ]' 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.259 19:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:48.259 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.259 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.259 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.259 19:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.518 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:48.518 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.086 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.345 19:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.345 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.605 { 00:13:49.605 "cntlid": 27, 00:13:49.605 "qid": 0, 00:13:49.605 "state": "enabled", 00:13:49.605 "thread": "nvmf_tgt_poll_group_000", 00:13:49.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:49.605 
"listen_address": { 00:13:49.605 "trtype": "TCP", 00:13:49.605 "adrfam": "IPv4", 00:13:49.605 "traddr": "10.0.0.2", 00:13:49.605 "trsvcid": "4420" 00:13:49.605 }, 00:13:49.605 "peer_address": { 00:13:49.605 "trtype": "TCP", 00:13:49.605 "adrfam": "IPv4", 00:13:49.605 "traddr": "10.0.0.1", 00:13:49.605 "trsvcid": "48114" 00:13:49.605 }, 00:13:49.605 "auth": { 00:13:49.605 "state": "completed", 00:13:49.605 "digest": "sha256", 00:13:49.605 "dhgroup": "ffdhe4096" 00:13:49.605 } 00:13:49.605 } 00:13:49.605 ]' 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.605 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.606 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.606 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:49.606 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.864 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.864 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.864 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.864 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:49.864 19:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.431 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.689 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:50.947 00:13:50.947 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:50.947 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.947 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.206 { 00:13:51.206 "cntlid": 29, 00:13:51.206 "qid": 0, 00:13:51.206 "state": "enabled", 00:13:51.206 "thread": "nvmf_tgt_poll_group_000", 00:13:51.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:51.206 "listen_address": { 00:13:51.206 "trtype": "TCP", 00:13:51.206 "adrfam": "IPv4", 00:13:51.206 "traddr": "10.0.0.2", 00:13:51.206 "trsvcid": "4420" 00:13:51.206 }, 00:13:51.206 "peer_address": { 00:13:51.206 "trtype": "TCP", 00:13:51.206 "adrfam": "IPv4", 00:13:51.206 "traddr": "10.0.0.1", 00:13:51.206 "trsvcid": "48132" 00:13:51.206 }, 00:13:51.206 "auth": { 00:13:51.206 "state": "completed", 00:13:51.206 "digest": "sha256", 00:13:51.206 "dhgroup": "ffdhe4096" 00:13:51.206 } 00:13:51.206 } 00:13:51.206 ]' 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.206 19:20:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.206 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.207 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.207 19:20:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.465 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:51.465 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:52.033 19:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.033 19:20:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.293 00:13:52.293 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.293 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.293 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.552 19:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.552 { 00:13:52.552 "cntlid": 31, 00:13:52.552 "qid": 0, 00:13:52.552 "state": "enabled", 00:13:52.552 "thread": "nvmf_tgt_poll_group_000", 00:13:52.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:52.552 "listen_address": { 00:13:52.552 "trtype": "TCP", 00:13:52.552 "adrfam": "IPv4", 00:13:52.552 "traddr": "10.0.0.2", 00:13:52.552 "trsvcid": "4420" 00:13:52.552 }, 00:13:52.552 "peer_address": { 00:13:52.552 "trtype": "TCP", 00:13:52.552 "adrfam": "IPv4", 00:13:52.552 "traddr": "10.0.0.1", 00:13:52.552 "trsvcid": "51022" 00:13:52.552 }, 00:13:52.552 "auth": { 00:13:52.552 "state": "completed", 00:13:52.552 "digest": "sha256", 00:13:52.552 "dhgroup": "ffdhe4096" 00:13:52.552 } 00:13:52.552 } 00:13:52.552 ]' 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.552 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.552 19:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.811 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:52.811 19:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:13:53.379 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.638 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:53.897 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.897 { 00:13:53.897 "cntlid": 33, 00:13:53.897 "qid": 0, 00:13:53.897 "state": "enabled", 00:13:53.897 "thread": "nvmf_tgt_poll_group_000", 00:13:53.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:53.897 "listen_address": { 
00:13:53.897 "trtype": "TCP", 00:13:53.897 "adrfam": "IPv4", 00:13:53.897 "traddr": "10.0.0.2", 00:13:53.897 "trsvcid": "4420" 00:13:53.897 }, 00:13:53.897 "peer_address": { 00:13:53.897 "trtype": "TCP", 00:13:53.897 "adrfam": "IPv4", 00:13:53.897 "traddr": "10.0.0.1", 00:13:53.897 "trsvcid": "51044" 00:13:53.897 }, 00:13:53.897 "auth": { 00:13:53.897 "state": "completed", 00:13:53.897 "digest": "sha256", 00:13:53.897 "dhgroup": "ffdhe6144" 00:13:53.897 } 00:13:53.897 } 00:13:53.897 ]' 00:13:53.897 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:54.156 19:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.724 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.983 19:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:55.243 00:13:55.243 19:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.243 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.243 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:55.500 { 00:13:55.500 "cntlid": 35, 00:13:55.500 "qid": 0, 00:13:55.500 "state": "enabled", 00:13:55.500 "thread": "nvmf_tgt_poll_group_000", 00:13:55.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:55.500 "listen_address": { 00:13:55.500 "trtype": "TCP", 00:13:55.500 "adrfam": "IPv4", 00:13:55.500 "traddr": "10.0.0.2", 00:13:55.500 "trsvcid": "4420" 00:13:55.500 }, 00:13:55.500 "peer_address": { 00:13:55.500 "trtype": "TCP", 00:13:55.500 "adrfam": "IPv4", 00:13:55.500 "traddr": "10.0.0.1", 00:13:55.500 "trsvcid": "51082" 00:13:55.500 }, 00:13:55.500 "auth": { 00:13:55.500 "state": "completed", 00:13:55.500 "digest": "sha256", 00:13:55.500 "dhgroup": "ffdhe6144" 00:13:55.500 } 00:13:55.500 } 00:13:55.500 ]' 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.500 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.759 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:55.759 19:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.326 19:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.326 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.585 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.845 00:13:56.845 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.845 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.845 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.154 { 00:13:57.154 "cntlid": 37, 00:13:57.154 "qid": 0, 00:13:57.154 "state": "enabled", 00:13:57.154 "thread": "nvmf_tgt_poll_group_000", 00:13:57.154 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:57.154 "listen_address": { 00:13:57.154 "trtype": "TCP", 00:13:57.154 "adrfam": "IPv4", 00:13:57.154 "traddr": "10.0.0.2", 00:13:57.154 "trsvcid": "4420" 00:13:57.154 }, 00:13:57.154 "peer_address": { 00:13:57.154 "trtype": "TCP", 00:13:57.154 "adrfam": "IPv4", 00:13:57.154 "traddr": "10.0.0.1", 00:13:57.154 "trsvcid": "51126" 00:13:57.154 }, 00:13:57.154 "auth": { 00:13:57.154 "state": "completed", 00:13:57.154 "digest": "sha256", 00:13:57.154 "dhgroup": "ffdhe6144" 00:13:57.154 } 00:13:57.154 } 00:13:57.154 ]' 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.154 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.155 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:57.155 19:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.776 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:13:57.777 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:57.777 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.035 19:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:58.294 00:13:58.294 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.294 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.294 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.554 { 00:13:58.554 "cntlid": 39, 00:13:58.554 "qid": 0, 00:13:58.554 "state": "enabled", 00:13:58.554 "thread": "nvmf_tgt_poll_group_000", 00:13:58.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:13:58.554 "listen_address": { 00:13:58.554 "trtype": 
"TCP", 00:13:58.554 "adrfam": "IPv4", 00:13:58.554 "traddr": "10.0.0.2", 00:13:58.554 "trsvcid": "4420" 00:13:58.554 }, 00:13:58.554 "peer_address": { 00:13:58.554 "trtype": "TCP", 00:13:58.554 "adrfam": "IPv4", 00:13:58.554 "traddr": "10.0.0.1", 00:13:58.554 "trsvcid": "51158" 00:13:58.554 }, 00:13:58.554 "auth": { 00:13:58.554 "state": "completed", 00:13:58.554 "digest": "sha256", 00:13:58.554 "dhgroup": "ffdhe6144" 00:13:58.554 } 00:13:58.554 } 00:13:58.554 ]' 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.554 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.813 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:58.813 19:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.381 19:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.381 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:59.948 00:13:59.948 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.948 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.948 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.207 { 00:14:00.207 "cntlid": 41, 00:14:00.207 "qid": 0, 00:14:00.207 "state": "enabled", 00:14:00.207 "thread": "nvmf_tgt_poll_group_000", 00:14:00.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:00.207 "listen_address": { 00:14:00.207 "trtype": "TCP", 00:14:00.207 "adrfam": "IPv4", 00:14:00.207 "traddr": "10.0.0.2", 00:14:00.207 "trsvcid": "4420" 00:14:00.207 }, 00:14:00.207 "peer_address": { 00:14:00.207 "trtype": "TCP", 00:14:00.207 "adrfam": "IPv4", 00:14:00.207 "traddr": "10.0.0.1", 00:14:00.207 "trsvcid": "51200" 00:14:00.207 }, 00:14:00.207 "auth": { 00:14:00.207 "state": "completed", 00:14:00.207 "digest": "sha256", 00:14:00.207 "dhgroup": "ffdhe8192" 00:14:00.207 } 00:14:00.207 } 00:14:00.207 ]' 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.207 19:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.207 19:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.467 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:00.467 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.034 19:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.601 00:14:01.601 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:01.601 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.601 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:01.860 { 00:14:01.860 "cntlid": 43, 00:14:01.860 "qid": 0, 00:14:01.860 "state": "enabled", 00:14:01.860 "thread": "nvmf_tgt_poll_group_000", 00:14:01.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:01.860 "listen_address": { 00:14:01.860 "trtype": "TCP", 00:14:01.860 "adrfam": "IPv4", 00:14:01.860 "traddr": "10.0.0.2", 00:14:01.860 "trsvcid": "4420" 00:14:01.860 }, 00:14:01.860 "peer_address": { 00:14:01.860 "trtype": "TCP", 00:14:01.860 "adrfam": "IPv4", 00:14:01.860 "traddr": "10.0.0.1", 00:14:01.860 "trsvcid": "51226" 00:14:01.860 }, 00:14:01.860 "auth": { 00:14:01.860 "state": "completed", 00:14:01.860 "digest": "sha256", 00:14:01.860 "dhgroup": "ffdhe8192" 00:14:01.860 } 00:14:01.860 } 00:14:01.860 ]' 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.860 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.119 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:02.119 19:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.687 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.254 00:14:03.254 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:03.254 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.254 19:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:03.513 { 00:14:03.513 "cntlid": 45, 00:14:03.513 "qid": 0, 00:14:03.513 "state": "enabled", 00:14:03.513 "thread": "nvmf_tgt_poll_group_000", 00:14:03.513 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:03.513 "listen_address": { 00:14:03.513 "trtype": "TCP", 00:14:03.513 "adrfam": "IPv4", 00:14:03.513 "traddr": "10.0.0.2", 00:14:03.513 "trsvcid": "4420" 00:14:03.513 }, 00:14:03.513 "peer_address": { 00:14:03.513 "trtype": "TCP", 00:14:03.513 "adrfam": "IPv4", 00:14:03.513 "traddr": "10.0.0.1", 00:14:03.513 "trsvcid": "35160" 00:14:03.513 }, 00:14:03.513 "auth": { 00:14:03.513 "state": "completed", 00:14:03.513 "digest": "sha256", 00:14:03.513 "dhgroup": "ffdhe8192" 00:14:03.513 } 00:14:03.513 } 00:14:03.513 ]' 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.513 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.772 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:03.772 19:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:04.339 19:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.339 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.905 00:14:04.905 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:04.905 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.905 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:05.163 { 00:14:05.163 "cntlid": 47, 00:14:05.163 "qid": 0, 00:14:05.163 "state": "enabled", 00:14:05.163 "thread": "nvmf_tgt_poll_group_000", 00:14:05.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:05.163 "listen_address": { 00:14:05.163 "trtype": "TCP", 00:14:05.163 "adrfam": "IPv4", 00:14:05.163 "traddr": "10.0.0.2", 00:14:05.163 "trsvcid": "4420" 00:14:05.163 }, 00:14:05.163 "peer_address": { 00:14:05.163 "trtype": "TCP", 00:14:05.163 "adrfam": "IPv4", 00:14:05.163 "traddr": "10.0.0.1", 00:14:05.163 "trsvcid": "35206" 00:14:05.163 }, 00:14:05.163 "auth": { 00:14:05.163 "state": "completed", 00:14:05.163 "digest": "sha256", 00:14:05.163 "dhgroup": "ffdhe8192" 00:14:05.163 } 00:14:05.163 } 00:14:05.163 ]' 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.163 19:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.163 19:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.422 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:05.422 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:05.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:05.989 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.248 19:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.248 00:14:06.248 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.248 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.248 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.507 19:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:06.507 { 00:14:06.507 "cntlid": 49, 00:14:06.507 "qid": 0, 00:14:06.507 "state": "enabled", 00:14:06.507 "thread": "nvmf_tgt_poll_group_000", 00:14:06.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:06.507 "listen_address": { 00:14:06.507 "trtype": "TCP", 00:14:06.507 "adrfam": "IPv4", 00:14:06.507 "traddr": "10.0.0.2", 00:14:06.507 "trsvcid": "4420" 00:14:06.507 }, 00:14:06.507 "peer_address": { 00:14:06.507 "trtype": "TCP", 00:14:06.507 "adrfam": "IPv4", 00:14:06.507 "traddr": "10.0.0.1", 00:14:06.507 "trsvcid": "35238" 00:14:06.507 }, 00:14:06.507 "auth": { 00:14:06.507 "state": "completed", 00:14:06.507 "digest": "sha384", 00:14:06.507 "dhgroup": "null" 00:14:06.507 } 00:14:06.507 } 00:14:06.507 ]' 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:06.507 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.766 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:06.766 19:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:07.332 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.591 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:07.849 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.849 { 00:14:07.849 "cntlid": 51, 
00:14:07.849 "qid": 0, 00:14:07.849 "state": "enabled", 00:14:07.849 "thread": "nvmf_tgt_poll_group_000", 00:14:07.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:07.849 "listen_address": { 00:14:07.849 "trtype": "TCP", 00:14:07.849 "adrfam": "IPv4", 00:14:07.849 "traddr": "10.0.0.2", 00:14:07.849 "trsvcid": "4420" 00:14:07.849 }, 00:14:07.849 "peer_address": { 00:14:07.849 "trtype": "TCP", 00:14:07.849 "adrfam": "IPv4", 00:14:07.849 "traddr": "10.0.0.1", 00:14:07.849 "trsvcid": "35276" 00:14:07.849 }, 00:14:07.849 "auth": { 00:14:07.849 "state": "completed", 00:14:07.849 "digest": "sha384", 00:14:07.849 "dhgroup": "null" 00:14:07.849 } 00:14:07.849 } 00:14:07.849 ]' 00:14:07.849 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret 
DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:08.108 19:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.043 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:09.303 00:14:09.303 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:09.303 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.303 19:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.303 { 00:14:09.303 "cntlid": 53, 00:14:09.303 "qid": 0, 00:14:09.303 "state": "enabled", 00:14:09.303 "thread": "nvmf_tgt_poll_group_000", 00:14:09.303 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:09.303 "listen_address": { 00:14:09.303 "trtype": "TCP", 00:14:09.303 "adrfam": "IPv4", 00:14:09.303 "traddr": "10.0.0.2", 00:14:09.303 "trsvcid": "4420" 00:14:09.303 }, 00:14:09.303 "peer_address": { 00:14:09.303 "trtype": "TCP", 00:14:09.303 "adrfam": "IPv4", 00:14:09.303 "traddr": "10.0.0.1", 00:14:09.303 "trsvcid": "35300" 00:14:09.303 }, 00:14:09.303 "auth": { 00:14:09.303 "state": "completed", 00:14:09.303 "digest": "sha384", 00:14:09.303 "dhgroup": "null" 00:14:09.303 } 00:14:09.303 } 
00:14:09.303 ]' 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.303 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:09.562 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.132 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.132 19:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.390 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:10.648 00:14:10.648 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.648 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:10.648 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:10.907 { 00:14:10.907 "cntlid": 55, 00:14:10.907 "qid": 0, 00:14:10.907 "state": "enabled", 00:14:10.907 "thread": "nvmf_tgt_poll_group_000", 00:14:10.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:10.907 "listen_address": { 00:14:10.907 "trtype": "TCP", 00:14:10.907 "adrfam": "IPv4", 00:14:10.907 "traddr": "10.0.0.2", 00:14:10.907 "trsvcid": "4420" 00:14:10.907 }, 00:14:10.907 "peer_address": { 00:14:10.907 "trtype": "TCP", 00:14:10.907 "adrfam": "IPv4", 00:14:10.907 "traddr": "10.0.0.1", 00:14:10.907 "trsvcid": "35328" 00:14:10.907 }, 00:14:10.907 "auth": { 00:14:10.907 "state": "completed", 00:14:10.907 "digest": "sha384", 00:14:10.907 "dhgroup": "null" 00:14:10.907 } 00:14:10.907 } 00:14:10.907 ]' 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.907 19:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.907 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.166 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:11.166 19:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:11.734 19:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.734 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:11.993 00:14:11.993 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:11.993 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:11.993 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.251 { 00:14:12.251 "cntlid": 57, 00:14:12.251 "qid": 0, 00:14:12.251 "state": "enabled", 00:14:12.251 "thread": "nvmf_tgt_poll_group_000", 00:14:12.251 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:12.251 "listen_address": { 00:14:12.251 "trtype": "TCP", 00:14:12.251 "adrfam": "IPv4", 00:14:12.251 "traddr": "10.0.0.2", 00:14:12.251 "trsvcid": "4420" 00:14:12.251 }, 00:14:12.251 "peer_address": { 00:14:12.251 "trtype": "TCP", 00:14:12.251 "adrfam": "IPv4", 00:14:12.251 "traddr": "10.0.0.1", 00:14:12.251 "trsvcid": "44554" 00:14:12.251 }, 00:14:12.251 "auth": { 00:14:12.251 "state": "completed", 00:14:12.251 "digest": "sha384", 00:14:12.251 "dhgroup": "ffdhe2048" 00:14:12.251 } 00:14:12.251 } 00:14:12.251 ]' 00:14:12.251 19:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.251 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:12.251 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:12.252 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:12.252 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:12.252 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.252 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.252 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.510 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret 
DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:12.510 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:13.078 19:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:13.337 19:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.337 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:13.596 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.596 { 00:14:13.596 "cntlid": 59, 00:14:13.596 "qid": 0, 00:14:13.596 "state": "enabled", 00:14:13.596 "thread": "nvmf_tgt_poll_group_000", 00:14:13.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:13.596 "listen_address": { 00:14:13.596 "trtype": "TCP", 00:14:13.596 "adrfam": "IPv4", 00:14:13.596 "traddr": "10.0.0.2", 00:14:13.596 "trsvcid": "4420" 00:14:13.596 }, 00:14:13.596 "peer_address": { 00:14:13.596 "trtype": "TCP", 00:14:13.596 "adrfam": "IPv4", 00:14:13.596 "traddr": "10.0.0.1", 00:14:13.596 "trsvcid": "44584" 00:14:13.596 }, 00:14:13.596 "auth": { 00:14:13.596 "state": 
"completed", 00:14:13.596 "digest": "sha384", 00:14:13.596 "dhgroup": "ffdhe2048" 00:14:13.596 } 00:14:13.596 } 00:14:13.596 ]' 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.596 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:13.855 19:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:14.424 19:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:14.424 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.682 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.941 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.941 
19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.941 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.199 { 00:14:15.199 "cntlid": 61, 00:14:15.199 "qid": 0, 00:14:15.199 "state": "enabled", 00:14:15.199 "thread": "nvmf_tgt_poll_group_000", 00:14:15.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:15.199 "listen_address": { 00:14:15.199 "trtype": "TCP", 00:14:15.199 "adrfam": "IPv4", 00:14:15.199 "traddr": "10.0.0.2", 00:14:15.199 "trsvcid": "4420" 00:14:15.199 }, 00:14:15.199 "peer_address": { 00:14:15.199 "trtype": "TCP", 00:14:15.199 "adrfam": "IPv4", 00:14:15.199 "traddr": "10.0.0.1", 00:14:15.199 "trsvcid": "44620" 00:14:15.199 }, 00:14:15.199 "auth": { 00:14:15.199 "state": "completed", 00:14:15.199 "digest": "sha384", 00:14:15.199 "dhgroup": "ffdhe2048" 00:14:15.199 } 00:14:15.199 } 00:14:15.199 ]' 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.199 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.200 19:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.200 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.200 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.200 19:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.200 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:15.200 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.768 
19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:15.768 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.026 19:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.026 19:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.285 00:14:16.285 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.285 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.285 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.544 { 00:14:16.544 "cntlid": 63, 00:14:16.544 
"qid": 0, 00:14:16.544 "state": "enabled", 00:14:16.544 "thread": "nvmf_tgt_poll_group_000", 00:14:16.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:16.544 "listen_address": { 00:14:16.544 "trtype": "TCP", 00:14:16.544 "adrfam": "IPv4", 00:14:16.544 "traddr": "10.0.0.2", 00:14:16.544 "trsvcid": "4420" 00:14:16.544 }, 00:14:16.544 "peer_address": { 00:14:16.544 "trtype": "TCP", 00:14:16.544 "adrfam": "IPv4", 00:14:16.544 "traddr": "10.0.0.1", 00:14:16.544 "trsvcid": "44654" 00:14:16.544 }, 00:14:16.544 "auth": { 00:14:16.544 "state": "completed", 00:14:16.544 "digest": "sha384", 00:14:16.544 "dhgroup": "ffdhe2048" 00:14:16.544 } 00:14:16.544 } 00:14:16.544 ]' 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.544 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.803 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:16.803 19:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:17.372 19:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.372 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.373 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:17.632 00:14:17.632 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.632 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.632 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.890 { 00:14:17.890 "cntlid": 65, 00:14:17.890 "qid": 0, 00:14:17.890 "state": "enabled", 00:14:17.890 "thread": "nvmf_tgt_poll_group_000", 00:14:17.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:17.890 "listen_address": { 00:14:17.890 "trtype": "TCP", 00:14:17.890 "adrfam": "IPv4", 00:14:17.890 "traddr": "10.0.0.2", 00:14:17.890 "trsvcid": "4420" 00:14:17.890 }, 00:14:17.890 "peer_address": { 00:14:17.890 "trtype": "TCP", 00:14:17.890 "adrfam": "IPv4", 00:14:17.890 "traddr": "10.0.0.1", 00:14:17.890 "trsvcid": "44672" 00:14:17.890 }, 00:14:17.890 "auth": { 00:14:17.890 "state": 
"completed", 00:14:17.890 "digest": "sha384", 00:14:17.890 "dhgroup": "ffdhe3072" 00:14:17.890 } 00:14:17.890 } 00:14:17.890 ]' 00:14:17.890 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.891 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.150 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:18.150 19:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret 
DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:18.717 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:18.975 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:19.234 00:14:19.234 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.234 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.234 19:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.234 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.234 { 00:14:19.234 "cntlid": 67, 00:14:19.234 "qid": 0, 00:14:19.234 "state": "enabled", 00:14:19.234 "thread": "nvmf_tgt_poll_group_000", 00:14:19.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:19.234 "listen_address": { 00:14:19.234 "trtype": "TCP", 00:14:19.234 "adrfam": "IPv4", 00:14:19.234 "traddr": "10.0.0.2", 00:14:19.234 "trsvcid": "4420" 00:14:19.234 }, 00:14:19.234 "peer_address": { 00:14:19.234 "trtype": "TCP", 00:14:19.234 "adrfam": "IPv4", 00:14:19.234 "traddr": "10.0.0.1", 00:14:19.234 "trsvcid": "44702" 00:14:19.234 }, 00:14:19.234 "auth": { 00:14:19.234 "state": "completed", 00:14:19.234 "digest": "sha384", 00:14:19.234 "dhgroup": "ffdhe3072" 00:14:19.234 } 00:14:19.234 } 00:14:19.234 ]' 00:14:19.235 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.235 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:19.235 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.235 19:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:19.235 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.502 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.502 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.502 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.502 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:19.502 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:20.071 19:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.330 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.589 00:14:20.589 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.589 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.589 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.848 19:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:20.848 { 00:14:20.848 "cntlid": 69, 00:14:20.848 "qid": 0, 00:14:20.848 "state": "enabled", 00:14:20.848 "thread": "nvmf_tgt_poll_group_000", 00:14:20.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:20.848 "listen_address": { 00:14:20.848 "trtype": "TCP", 00:14:20.848 "adrfam": "IPv4", 00:14:20.848 "traddr": "10.0.0.2", 00:14:20.848 "trsvcid": "4420" 00:14:20.848 }, 00:14:20.848 "peer_address": { 00:14:20.848 "trtype": "TCP", 00:14:20.848 "adrfam": "IPv4", 00:14:20.848 "traddr": "10.0.0.1", 00:14:20.848 "trsvcid": "44738" 00:14:20.848 }, 00:14:20.848 "auth": { 00:14:20.848 "state": "completed", 00:14:20.848 "digest": "sha384", 00:14:20.848 "dhgroup": "ffdhe3072" 00:14:20.848 } 00:14:20.848 } 00:14:20.848 ]' 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.848 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.849 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.108 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:21.108 19:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:21.675 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.676 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.676 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.676 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:21.676 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.676 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:21.935 00:14:21.935 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.935 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.935 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:22.194 { 00:14:22.194 "cntlid": 71, 00:14:22.194 "qid": 0, 00:14:22.194 "state": "enabled", 00:14:22.194 "thread": "nvmf_tgt_poll_group_000", 00:14:22.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:22.194 "listen_address": { 00:14:22.194 "trtype": "TCP", 00:14:22.194 "adrfam": "IPv4", 00:14:22.194 "traddr": "10.0.0.2", 00:14:22.194 "trsvcid": "4420" 00:14:22.194 }, 00:14:22.194 "peer_address": { 00:14:22.194 "trtype": "TCP", 00:14:22.194 "adrfam": "IPv4", 00:14:22.194 "traddr": "10.0.0.1", 
00:14:22.194 "trsvcid": "44768" 00:14:22.194 }, 00:14:22.194 "auth": { 00:14:22.194 "state": "completed", 00:14:22.194 "digest": "sha384", 00:14:22.194 "dhgroup": "ffdhe3072" 00:14:22.194 } 00:14:22.194 } 00:14:22.194 ]' 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.194 19:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.454 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:22.454 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:23.021 19:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.021 19:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.280 00:14:23.280 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.280 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.280 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.539 { 00:14:23.539 "cntlid": 73, 00:14:23.539 "qid": 0, 00:14:23.539 "state": "enabled", 00:14:23.539 "thread": "nvmf_tgt_poll_group_000", 00:14:23.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:23.539 "listen_address": { 00:14:23.539 "trtype": "TCP", 00:14:23.539 "adrfam": "IPv4", 00:14:23.539 "traddr": "10.0.0.2", 00:14:23.539 "trsvcid": "4420" 00:14:23.539 }, 00:14:23.539 "peer_address": { 00:14:23.539 "trtype": "TCP", 00:14:23.539 "adrfam": "IPv4", 00:14:23.539 "traddr": "10.0.0.1", 00:14:23.539 "trsvcid": "53622" 00:14:23.539 }, 00:14:23.539 "auth": { 00:14:23.539 "state": "completed", 00:14:23.539 "digest": "sha384", 00:14:23.539 "dhgroup": "ffdhe4096" 00:14:23.539 } 00:14:23.539 } 00:14:23.539 ]' 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.539 19:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.539 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.799 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:23.799 19:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:24.366 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.366 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.625 19:20:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.625 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.625 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.884 { 00:14:24.884 "cntlid": 75, 00:14:24.884 "qid": 0, 00:14:24.884 "state": "enabled", 00:14:24.884 "thread": "nvmf_tgt_poll_group_000", 00:14:24.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:24.884 "listen_address": { 00:14:24.884 "trtype": "TCP", 00:14:24.884 "adrfam": "IPv4", 00:14:24.884 "traddr": "10.0.0.2", 00:14:24.884 "trsvcid": "4420" 00:14:24.884 }, 00:14:24.884 "peer_address": { 00:14:24.884 "trtype": "TCP", 00:14:24.884 "adrfam": "IPv4", 00:14:24.884 "traddr": "10.0.0.1", 00:14:24.884 "trsvcid": "53652" 00:14:24.884 }, 00:14:24.884 "auth": { 00:14:24.884 "state": "completed", 00:14:24.884 "digest": "sha384", 00:14:24.884 "dhgroup": "ffdhe4096" 00:14:24.884 } 00:14:24.884 } 00:14:24.884 ]' 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:24.884 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.143 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.143 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.143 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.143 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:25.143 19:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.710 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:25.710 19:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:25.970 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.229 00:14:26.229 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.229 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:26.229 19:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.487 { 00:14:26.487 "cntlid": 77, 00:14:26.487 "qid": 0, 00:14:26.487 "state": "enabled", 00:14:26.487 "thread": "nvmf_tgt_poll_group_000", 00:14:26.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:26.487 "listen_address": { 00:14:26.487 "trtype": "TCP", 00:14:26.487 "adrfam": "IPv4", 00:14:26.487 "traddr": "10.0.0.2", 00:14:26.487 
"trsvcid": "4420" 00:14:26.487 }, 00:14:26.487 "peer_address": { 00:14:26.487 "trtype": "TCP", 00:14:26.487 "adrfam": "IPv4", 00:14:26.487 "traddr": "10.0.0.1", 00:14:26.487 "trsvcid": "53672" 00:14:26.487 }, 00:14:26.487 "auth": { 00:14:26.487 "state": "completed", 00:14:26.487 "digest": "sha384", 00:14:26.487 "dhgroup": "ffdhe4096" 00:14:26.487 } 00:14:26.487 } 00:14:26.487 ]' 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.487 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.746 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:26.746 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:27.314 19:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:27.314 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:27.573 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.573 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.833 { 00:14:27.833 "cntlid": 79, 00:14:27.833 "qid": 0, 00:14:27.833 "state": "enabled", 00:14:27.833 "thread": "nvmf_tgt_poll_group_000", 00:14:27.833 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:27.833 "listen_address": { 00:14:27.833 "trtype": "TCP", 00:14:27.833 "adrfam": "IPv4", 00:14:27.833 "traddr": "10.0.0.2", 00:14:27.833 "trsvcid": "4420" 00:14:27.833 }, 00:14:27.833 "peer_address": { 00:14:27.833 "trtype": "TCP", 00:14:27.833 "adrfam": "IPv4", 00:14:27.833 "traddr": "10.0.0.1", 00:14:27.833 "trsvcid": "53708" 00:14:27.833 }, 00:14:27.833 "auth": { 00:14:27.833 "state": "completed", 00:14:27.833 "digest": "sha384", 00:14:27.833 "dhgroup": "ffdhe4096" 00:14:27.833 } 00:14:27.833 } 00:14:27.833 ]' 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.833 19:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.833 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.093 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:28.093 19:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.660 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.919 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:29.179 00:14:29.179 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.179 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.179 19:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.438 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.438 { 00:14:29.438 "cntlid": 81, 00:14:29.438 "qid": 0, 00:14:29.438 "state": "enabled", 00:14:29.438 "thread": "nvmf_tgt_poll_group_000", 00:14:29.438 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:29.438 "listen_address": { 00:14:29.438 "trtype": "TCP", 00:14:29.438 "adrfam": "IPv4", 00:14:29.438 "traddr": "10.0.0.2", 00:14:29.438 "trsvcid": "4420" 00:14:29.438 }, 00:14:29.438 "peer_address": { 00:14:29.438 "trtype": "TCP", 00:14:29.438 "adrfam": "IPv4", 00:14:29.438 "traddr": "10.0.0.1", 00:14:29.438 "trsvcid": "53730" 00:14:29.438 }, 00:14:29.438 "auth": { 00:14:29.438 "state": "completed", 00:14:29.438 "digest": "sha384", 00:14:29.438 "dhgroup": "ffdhe6144" 00:14:29.438 } 00:14:29.438 } 00:14:29.438 ]' 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.438 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.697 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:29.697 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.266 19:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.267 19:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.267 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.525 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.525 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.525 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.525 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.785 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.785 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.785 { 00:14:30.785 "cntlid": 83, 00:14:30.785 "qid": 0, 00:14:30.785 "state": "enabled", 00:14:30.785 "thread": "nvmf_tgt_poll_group_000", 00:14:30.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:30.785 "listen_address": { 00:14:30.785 "trtype": "TCP", 00:14:30.785 "adrfam": "IPv4", 00:14:30.785 "traddr": "10.0.0.2", 00:14:30.786 
"trsvcid": "4420" 00:14:30.786 }, 00:14:30.786 "peer_address": { 00:14:30.786 "trtype": "TCP", 00:14:30.786 "adrfam": "IPv4", 00:14:30.786 "traddr": "10.0.0.1", 00:14:30.786 "trsvcid": "53760" 00:14:30.786 }, 00:14:30.786 "auth": { 00:14:30.786 "state": "completed", 00:14:30.786 "digest": "sha384", 00:14:30.786 "dhgroup": "ffdhe6144" 00:14:30.786 } 00:14:30.786 } 00:14:30.786 ]' 00:14:30.786 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.047 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.048 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.048 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:31.048 19:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:31.617 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:31.877 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.136 00:14:32.136 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.136 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:32.136 19:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:32.395 { 00:14:32.395 "cntlid": 85, 00:14:32.395 "qid": 0, 00:14:32.395 "state": "enabled", 00:14:32.395 "thread": "nvmf_tgt_poll_group_000", 00:14:32.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:32.395 "listen_address": { 00:14:32.395 "trtype": "TCP", 00:14:32.395 "adrfam": "IPv4", 00:14:32.395 "traddr": "10.0.0.2", 00:14:32.395 "trsvcid": "4420" 00:14:32.395 }, 00:14:32.395 "peer_address": { 00:14:32.395 "trtype": "TCP", 00:14:32.395 "adrfam": "IPv4", 00:14:32.395 "traddr": "10.0.0.1", 00:14:32.395 "trsvcid": "54674" 00:14:32.395 }, 00:14:32.395 "auth": { 00:14:32.395 "state": "completed", 00:14:32.395 "digest": "sha384", 00:14:32.395 "dhgroup": "ffdhe6144" 00:14:32.395 } 00:14:32.395 } 00:14:32.395 ]' 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:32.395 19:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.395 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.653 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:32.654 19:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.221 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:33.480 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.481 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:33.739 00:14:33.739 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.739 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.739 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.999 { 00:14:33.999 "cntlid": 87, 00:14:33.999 "qid": 0, 00:14:33.999 "state": "enabled", 00:14:33.999 "thread": "nvmf_tgt_poll_group_000", 00:14:33.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:33.999 "listen_address": { 00:14:33.999 "trtype": "TCP", 00:14:33.999 "adrfam": "IPv4", 00:14:33.999 "traddr": "10.0.0.2", 00:14:33.999 "trsvcid": "4420" 00:14:33.999 }, 00:14:33.999 "peer_address": { 00:14:33.999 "trtype": "TCP", 00:14:33.999 "adrfam": "IPv4", 00:14:33.999 "traddr": "10.0.0.1", 00:14:33.999 "trsvcid": "54704" 00:14:33.999 }, 00:14:33.999 "auth": { 00:14:33.999 "state": "completed", 00:14:33.999 "digest": "sha384", 00:14:33.999 "dhgroup": "ffdhe6144" 00:14:33.999 } 00:14:33.999 } 00:14:33.999 ]' 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.999 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.258 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:34.258 19:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.826 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:34.826 19:21:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.085 19:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.344 00:14:35.344 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.344 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.344 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.603 { 00:14:35.603 "cntlid": 89, 00:14:35.603 "qid": 0, 00:14:35.603 "state": "enabled", 00:14:35.603 "thread": "nvmf_tgt_poll_group_000", 00:14:35.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:35.603 "listen_address": { 00:14:35.603 "trtype": "TCP", 00:14:35.603 "adrfam": "IPv4", 00:14:35.603 "traddr": "10.0.0.2", 00:14:35.603 
"trsvcid": "4420" 00:14:35.603 }, 00:14:35.603 "peer_address": { 00:14:35.603 "trtype": "TCP", 00:14:35.603 "adrfam": "IPv4", 00:14:35.603 "traddr": "10.0.0.1", 00:14:35.603 "trsvcid": "54730" 00:14:35.603 }, 00:14:35.603 "auth": { 00:14:35.603 "state": "completed", 00:14:35.603 "digest": "sha384", 00:14:35.603 "dhgroup": "ffdhe8192" 00:14:35.603 } 00:14:35.603 } 00:14:35.603 ]' 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.603 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.862 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:35.862 19:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:36.429 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.687 19:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.687 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.255 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.255 19:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.256 { 00:14:37.256 "cntlid": 91, 00:14:37.256 "qid": 0, 00:14:37.256 "state": "enabled", 00:14:37.256 "thread": "nvmf_tgt_poll_group_000", 00:14:37.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:37.256 "listen_address": { 00:14:37.256 "trtype": "TCP", 00:14:37.256 "adrfam": "IPv4", 00:14:37.256 "traddr": "10.0.0.2", 00:14:37.256 "trsvcid": "4420" 00:14:37.256 }, 00:14:37.256 "peer_address": { 00:14:37.256 "trtype": "TCP", 00:14:37.256 "adrfam": "IPv4", 00:14:37.256 "traddr": "10.0.0.1", 00:14:37.256 "trsvcid": "54748" 00:14:37.256 }, 00:14:37.256 "auth": { 00:14:37.256 "state": "completed", 00:14:37.256 "digest": "sha384", 00:14:37.256 "dhgroup": "ffdhe8192" 00:14:37.256 } 00:14:37.256 } 00:14:37.256 ]' 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.256 19:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.256 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.516 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:37.516 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.147 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:38.458 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.459 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:38.459 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.459 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.459 19:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.459 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.459 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.459 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:38.749 00:14:38.749 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:38.749 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:38.749 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.007 19:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.007 { 00:14:39.007 "cntlid": 93, 00:14:39.007 "qid": 0, 00:14:39.007 "state": "enabled", 00:14:39.007 "thread": "nvmf_tgt_poll_group_000", 00:14:39.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:39.007 "listen_address": { 00:14:39.007 "trtype": "TCP", 00:14:39.007 "adrfam": "IPv4", 00:14:39.007 "traddr": "10.0.0.2", 00:14:39.007 "trsvcid": "4420" 00:14:39.007 }, 00:14:39.007 "peer_address": { 00:14:39.007 "trtype": "TCP", 00:14:39.007 "adrfam": "IPv4", 00:14:39.007 "traddr": "10.0.0.1", 00:14:39.007 "trsvcid": "54774" 00:14:39.007 }, 00:14:39.007 "auth": { 00:14:39.007 "state": "completed", 00:14:39.007 "digest": "sha384", 00:14:39.007 "dhgroup": "ffdhe8192" 00:14:39.007 } 00:14:39.007 } 00:14:39.007 ]' 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.007 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.268 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:39.268 19:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:39.836 19:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:40.405 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.405 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:40.405 { 00:14:40.405 "cntlid": 95, 00:14:40.405 "qid": 0, 00:14:40.405 "state": "enabled", 00:14:40.405 "thread": "nvmf_tgt_poll_group_000", 00:14:40.405 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:40.405 "listen_address": { 00:14:40.405 "trtype": "TCP", 00:14:40.405 "adrfam": 
"IPv4", 00:14:40.405 "traddr": "10.0.0.2", 00:14:40.405 "trsvcid": "4420" 00:14:40.405 }, 00:14:40.405 "peer_address": { 00:14:40.405 "trtype": "TCP", 00:14:40.405 "adrfam": "IPv4", 00:14:40.405 "traddr": "10.0.0.1", 00:14:40.405 "trsvcid": "54784" 00:14:40.405 }, 00:14:40.405 "auth": { 00:14:40.405 "state": "completed", 00:14:40.405 "digest": "sha384", 00:14:40.405 "dhgroup": "ffdhe8192" 00:14:40.405 } 00:14:40.405 } 00:14:40.405 ]' 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:40.665 19:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:41.232 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.490 
19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.490 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.750 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.750 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.750 { 00:14:41.750 "cntlid": 97, 00:14:41.750 "qid": 0, 00:14:41.750 "state": "enabled", 00:14:41.750 "thread": "nvmf_tgt_poll_group_000", 00:14:41.750 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:41.750 "listen_address": { 00:14:41.750 "trtype": "TCP", 00:14:41.750 "adrfam": "IPv4", 00:14:41.750 "traddr": "10.0.0.2", 00:14:41.750 "trsvcid": "4420" 00:14:41.750 }, 00:14:41.750 "peer_address": { 00:14:41.750 "trtype": "TCP", 00:14:41.750 "adrfam": "IPv4", 00:14:41.750 "traddr": "10.0.0.1", 00:14:41.750 "trsvcid": "54822" 00:14:41.750 }, 00:14:41.750 "auth": { 00:14:41.751 "state": "completed", 00:14:41.751 "digest": "sha512", 00:14:41.751 "dhgroup": "null" 00:14:41.751 } 00:14:41.751 } 00:14:41.751 ]' 00:14:41.751 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.010 19:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:42.010 19:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.579 19:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:42.579 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.839 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.099 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.099 { 00:14:43.099 "cntlid": 99, 00:14:43.099 "qid": 0, 00:14:43.099 "state": "enabled", 00:14:43.099 "thread": "nvmf_tgt_poll_group_000", 00:14:43.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:43.099 "listen_address": { 00:14:43.099 "trtype": "TCP", 00:14:43.099 "adrfam": "IPv4", 00:14:43.099 "traddr": "10.0.0.2", 00:14:43.099 "trsvcid": "4420" 00:14:43.099 }, 00:14:43.099 "peer_address": { 00:14:43.099 "trtype": "TCP", 00:14:43.099 "adrfam": "IPv4", 00:14:43.099 "traddr": "10.0.0.1", 00:14:43.099 "trsvcid": "51968" 00:14:43.099 }, 00:14:43.099 "auth": { 00:14:43.099 "state": "completed", 00:14:43.099 "digest": "sha512", 00:14:43.099 "dhgroup": "null" 00:14:43.099 } 00:14:43.099 } 00:14:43.099 ]' 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:43.099 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:43.358 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:43.358 19:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:43.358 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.358 
19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.358 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.358 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:43.358 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:43.926 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:44.185 
19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.185 19:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.444 00:14:44.444 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.444 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.444 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.704 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.704 { 00:14:44.704 "cntlid": 101, 00:14:44.704 "qid": 0, 00:14:44.704 "state": "enabled", 00:14:44.704 "thread": "nvmf_tgt_poll_group_000", 00:14:44.704 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:44.705 "listen_address": { 00:14:44.705 "trtype": "TCP", 00:14:44.705 "adrfam": "IPv4", 00:14:44.705 "traddr": "10.0.0.2", 00:14:44.705 "trsvcid": "4420" 00:14:44.705 }, 00:14:44.705 "peer_address": { 00:14:44.705 "trtype": "TCP", 00:14:44.705 "adrfam": "IPv4", 00:14:44.705 "traddr": "10.0.0.1", 00:14:44.705 "trsvcid": "52006" 00:14:44.705 }, 00:14:44.705 "auth": { 00:14:44.705 "state": "completed", 00:14:44.705 "digest": "sha512", 00:14:44.705 "dhgroup": "null" 00:14:44.705 } 00:14:44.705 } 00:14:44.705 ]' 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.705 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.964 19:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:44.964 19:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:14:45.532 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.533 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.791 00:14:45.791 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.791 19:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.791 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.049 { 00:14:46.049 "cntlid": 103, 00:14:46.049 "qid": 0, 00:14:46.049 "state": "enabled", 00:14:46.049 "thread": "nvmf_tgt_poll_group_000", 00:14:46.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:46.049 "listen_address": { 00:14:46.049 "trtype": "TCP", 00:14:46.049 "adrfam": "IPv4", 00:14:46.049 "traddr": "10.0.0.2", 00:14:46.049 "trsvcid": "4420" 00:14:46.049 }, 00:14:46.049 "peer_address": { 00:14:46.049 "trtype": "TCP", 00:14:46.049 "adrfam": "IPv4", 00:14:46.049 "traddr": "10.0.0.1", 00:14:46.049 "trsvcid": "52028" 00:14:46.049 }, 00:14:46.049 "auth": { 00:14:46.049 "state": "completed", 00:14:46.049 "digest": "sha512", 00:14:46.049 "dhgroup": "null" 00:14:46.049 } 00:14:46.049 } 00:14:46.049 ]' 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- 
# [[ sha512 == \s\h\a\5\1\2 ]] 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.049 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.308 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:46.308 19:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.876 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.135 00:14:47.135 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.135 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.135 19:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.394 { 00:14:47.394 "cntlid": 105, 00:14:47.394 "qid": 0, 00:14:47.394 "state": "enabled", 00:14:47.394 "thread": "nvmf_tgt_poll_group_000", 00:14:47.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:47.394 "listen_address": { 00:14:47.394 "trtype": "TCP", 00:14:47.394 "adrfam": "IPv4", 00:14:47.394 "traddr": "10.0.0.2", 00:14:47.394 "trsvcid": "4420" 00:14:47.394 }, 00:14:47.394 "peer_address": { 00:14:47.394 "trtype": "TCP", 00:14:47.394 "adrfam": "IPv4", 00:14:47.394 "traddr": "10.0.0.1", 00:14:47.394 "trsvcid": "52052" 00:14:47.394 }, 00:14:47.394 "auth": { 00:14:47.394 "state": "completed", 00:14:47.394 "digest": "sha512", 00:14:47.394 "dhgroup": "ffdhe2048" 00:14:47.394 } 00:14:47.394 } 00:14:47.394 ]' 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:47.394 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:47.395 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:47.395 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.395 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.395 19:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.653 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:47.653 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:48.220 19:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.220 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.478 00:14:48.478 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.478 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.478 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.736 { 00:14:48.736 "cntlid": 107, 00:14:48.736 "qid": 0, 00:14:48.736 "state": "enabled", 00:14:48.736 "thread": "nvmf_tgt_poll_group_000", 00:14:48.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:48.736 
"listen_address": { 00:14:48.736 "trtype": "TCP", 00:14:48.736 "adrfam": "IPv4", 00:14:48.736 "traddr": "10.0.0.2", 00:14:48.736 "trsvcid": "4420" 00:14:48.736 }, 00:14:48.736 "peer_address": { 00:14:48.736 "trtype": "TCP", 00:14:48.736 "adrfam": "IPv4", 00:14:48.736 "traddr": "10.0.0.1", 00:14:48.736 "trsvcid": "52062" 00:14:48.736 }, 00:14:48.736 "auth": { 00:14:48.736 "state": "completed", 00:14:48.736 "digest": "sha512", 00:14:48.736 "dhgroup": "ffdhe2048" 00:14:48.736 } 00:14:48.736 } 00:14:48.736 ]' 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.736 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.995 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:48.995 19:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:49.562 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.821 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.081 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.081 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.081 { 00:14:50.081 "cntlid": 109, 00:14:50.081 "qid": 0, 00:14:50.081 "state": "enabled", 00:14:50.081 "thread": "nvmf_tgt_poll_group_000", 00:14:50.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:50.081 "listen_address": { 00:14:50.081 "trtype": "TCP", 00:14:50.081 "adrfam": "IPv4", 00:14:50.081 "traddr": "10.0.0.2", 00:14:50.081 "trsvcid": "4420" 00:14:50.081 }, 00:14:50.081 "peer_address": { 00:14:50.081 "trtype": "TCP", 00:14:50.081 "adrfam": "IPv4", 00:14:50.081 "traddr": "10.0.0.1", 00:14:50.081 "trsvcid": "52098" 00:14:50.081 }, 00:14:50.082 "auth": { 00:14:50.082 "state": "completed", 00:14:50.082 "digest": "sha512", 00:14:50.082 "dhgroup": "ffdhe2048" 00:14:50.082 } 00:14:50.082 } 00:14:50.082 ]' 00:14:50.082 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.082 19:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:50.082 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:50.340 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.340 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:50.340 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.340 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.340 19:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.340 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:50.340 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:50.906 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:51.165 19:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:51.423 19:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:51.423 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.423 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.681 19:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.681 { 00:14:51.681 "cntlid": 111, 00:14:51.681 "qid": 0, 00:14:51.681 "state": "enabled", 00:14:51.681 "thread": "nvmf_tgt_poll_group_000", 00:14:51.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:51.681 "listen_address": { 00:14:51.681 "trtype": "TCP", 00:14:51.681 "adrfam": "IPv4", 00:14:51.681 "traddr": "10.0.0.2", 00:14:51.681 "trsvcid": "4420" 00:14:51.681 }, 00:14:51.681 "peer_address": { 00:14:51.681 "trtype": "TCP", 00:14:51.681 "adrfam": "IPv4", 00:14:51.681 "traddr": "10.0.0.1", 00:14:51.681 "trsvcid": "52116" 00:14:51.681 }, 00:14:51.681 "auth": { 00:14:51.681 "state": "completed", 00:14:51.681 "digest": "sha512", 00:14:51.681 "dhgroup": "ffdhe2048" 00:14:51.681 } 00:14:51.681 } 00:14:51.681 ]' 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.681 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.681 19:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.939 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:51.939 19:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:14:52.505 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:52.762 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.020 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.020 { 00:14:53.020 "cntlid": 113, 00:14:53.020 "qid": 0, 00:14:53.020 "state": "enabled", 00:14:53.020 "thread": "nvmf_tgt_poll_group_000", 00:14:53.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:53.020 "listen_address": { 
00:14:53.020 "trtype": "TCP", 00:14:53.020 "adrfam": "IPv4", 00:14:53.020 "traddr": "10.0.0.2", 00:14:53.020 "trsvcid": "4420" 00:14:53.020 }, 00:14:53.020 "peer_address": { 00:14:53.020 "trtype": "TCP", 00:14:53.020 "adrfam": "IPv4", 00:14:53.020 "traddr": "10.0.0.1", 00:14:53.020 "trsvcid": "60168" 00:14:53.020 }, 00:14:53.020 "auth": { 00:14:53.020 "state": "completed", 00:14:53.020 "digest": "sha512", 00:14:53.020 "dhgroup": "ffdhe3072" 00:14:53.020 } 00:14:53.020 } 00:14:53.020 ]' 00:14:53.020 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.277 19:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.277 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:53.277 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:53.843 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.101 19:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:54.358 00:14:54.358 19:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.358 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.358 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.616 { 00:14:54.616 "cntlid": 115, 00:14:54.616 "qid": 0, 00:14:54.616 "state": "enabled", 00:14:54.616 "thread": "nvmf_tgt_poll_group_000", 00:14:54.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:54.616 "listen_address": { 00:14:54.616 "trtype": "TCP", 00:14:54.616 "adrfam": "IPv4", 00:14:54.616 "traddr": "10.0.0.2", 00:14:54.616 "trsvcid": "4420" 00:14:54.616 }, 00:14:54.616 "peer_address": { 00:14:54.616 "trtype": "TCP", 00:14:54.616 "adrfam": "IPv4", 00:14:54.616 "traddr": "10.0.0.1", 00:14:54.616 "trsvcid": "60196" 00:14:54.616 }, 00:14:54.616 "auth": { 00:14:54.616 "state": "completed", 00:14:54.616 "digest": "sha512", 00:14:54.616 "dhgroup": "ffdhe3072" 00:14:54.616 } 00:14:54.616 } 00:14:54.616 ]' 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.616 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.874 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:54.874 19:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.440 19:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:55.440 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:55.697 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.697 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.956 { 00:14:55.956 "cntlid": 117, 00:14:55.956 "qid": 0, 00:14:55.956 "state": "enabled", 00:14:55.956 "thread": "nvmf_tgt_poll_group_000", 00:14:55.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:55.956 "listen_address": { 00:14:55.956 "trtype": "TCP", 00:14:55.956 "adrfam": "IPv4", 00:14:55.956 "traddr": "10.0.0.2", 00:14:55.956 "trsvcid": "4420" 00:14:55.956 }, 00:14:55.956 "peer_address": { 00:14:55.956 "trtype": "TCP", 00:14:55.956 "adrfam": "IPv4", 00:14:55.956 "traddr": "10.0.0.1", 00:14:55.956 "trsvcid": "60210" 00:14:55.956 }, 00:14:55.956 "auth": { 00:14:55.956 "state": "completed", 00:14:55.956 "digest": "sha512", 00:14:55.956 "dhgroup": "ffdhe3072" 00:14:55.956 } 00:14:55.956 } 00:14:55.956 ]' 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.956 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.214 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:56.214 19:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:56.781 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.040 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:57.297 00:14:57.297 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.297 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.297 19:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.297 { 00:14:57.297 "cntlid": 119, 00:14:57.297 "qid": 0, 00:14:57.297 "state": "enabled", 00:14:57.297 "thread": "nvmf_tgt_poll_group_000", 00:14:57.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:57.297 "listen_address": { 00:14:57.297 
"trtype": "TCP", 00:14:57.297 "adrfam": "IPv4", 00:14:57.297 "traddr": "10.0.0.2", 00:14:57.297 "trsvcid": "4420" 00:14:57.297 }, 00:14:57.297 "peer_address": { 00:14:57.297 "trtype": "TCP", 00:14:57.297 "adrfam": "IPv4", 00:14:57.297 "traddr": "10.0.0.1", 00:14:57.297 "trsvcid": "60222" 00:14:57.297 }, 00:14:57.297 "auth": { 00:14:57.297 "state": "completed", 00:14:57.297 "digest": "sha512", 00:14:57.297 "dhgroup": "ffdhe3072" 00:14:57.297 } 00:14:57.297 } 00:14:57.297 ]' 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.297 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.556 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.556 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.556 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.556 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:57.556 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:58.123 19:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.382 19:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.382 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.641 00:14:58.641 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.641 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.641 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.901 { 00:14:58.901 "cntlid": 121, 00:14:58.901 "qid": 0, 00:14:58.901 "state": "enabled", 00:14:58.901 "thread": "nvmf_tgt_poll_group_000", 00:14:58.901 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:14:58.901 "listen_address": { 00:14:58.901 "trtype": "TCP", 00:14:58.901 "adrfam": "IPv4", 00:14:58.901 "traddr": "10.0.0.2", 00:14:58.901 "trsvcid": "4420" 00:14:58.901 }, 00:14:58.901 "peer_address": { 00:14:58.901 "trtype": "TCP", 00:14:58.901 "adrfam": "IPv4", 00:14:58.901 "traddr": "10.0.0.1", 00:14:58.901 "trsvcid": "60254" 00:14:58.901 }, 00:14:58.901 "auth": { 00:14:58.901 "state": "completed", 00:14:58.901 "digest": "sha512", 00:14:58.901 "dhgroup": "ffdhe4096" 00:14:58.901 } 00:14:58.901 } 00:14:58.901 ]' 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.901 19:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.901 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.160 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:59.160 19:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.727 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.986 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.986 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.245 19:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.245 { 00:15:00.245 "cntlid": 123, 00:15:00.245 "qid": 0, 00:15:00.245 "state": "enabled", 00:15:00.245 "thread": "nvmf_tgt_poll_group_000", 00:15:00.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:00.245 "listen_address": { 00:15:00.245 "trtype": "TCP", 00:15:00.245 "adrfam": "IPv4", 00:15:00.245 "traddr": "10.0.0.2", 00:15:00.245 "trsvcid": "4420" 00:15:00.245 }, 00:15:00.245 "peer_address": { 00:15:00.245 "trtype": "TCP", 00:15:00.245 "adrfam": "IPv4", 00:15:00.245 "traddr": "10.0.0.1", 00:15:00.245 "trsvcid": "60278" 00:15:00.245 }, 00:15:00.245 "auth": { 00:15:00.245 "state": "completed", 00:15:00.245 "digest": "sha512", 00:15:00.245 "dhgroup": "ffdhe4096" 00:15:00.245 } 00:15:00.245 } 00:15:00.245 ]' 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.245 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.504 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:00.504 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:01.072 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:01.331 19:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.331 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.589 00:15:01.589 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.589 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.589 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:01.848 { 00:15:01.848 "cntlid": 125, 00:15:01.848 "qid": 0, 00:15:01.848 "state": "enabled", 00:15:01.848 "thread": "nvmf_tgt_poll_group_000", 00:15:01.848 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:01.848 "listen_address": { 00:15:01.848 "trtype": "TCP", 00:15:01.848 "adrfam": "IPv4", 00:15:01.848 "traddr": "10.0.0.2", 00:15:01.848 "trsvcid": "4420" 00:15:01.848 }, 00:15:01.848 "peer_address": { 00:15:01.848 "trtype": "TCP", 00:15:01.848 "adrfam": "IPv4", 00:15:01.848 "traddr": "10.0.0.1", 00:15:01.848 "trsvcid": "60324" 00:15:01.848 }, 00:15:01.848 "auth": { 00:15:01.848 "state": "completed", 00:15:01.848 "digest": "sha512", 00:15:01.848 "dhgroup": "ffdhe4096" 00:15:01.848 } 00:15:01.848 } 00:15:01.848 ]' 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.848 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.107 19:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:02.107 19:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:02.673 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:02.931 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.190 00:15:03.190 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:03.190 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.190 19:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.190 { 00:15:03.190 "cntlid": 127, 00:15:03.190 "qid": 0, 00:15:03.190 "state": "enabled", 00:15:03.190 "thread": "nvmf_tgt_poll_group_000", 00:15:03.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:03.190 "listen_address": { 00:15:03.190 "trtype": "TCP", 00:15:03.190 "adrfam": "IPv4", 00:15:03.190 "traddr": "10.0.0.2", 00:15:03.190 "trsvcid": "4420" 00:15:03.190 }, 00:15:03.190 "peer_address": { 00:15:03.190 "trtype": "TCP", 00:15:03.190 "adrfam": "IPv4", 00:15:03.190 "traddr": "10.0.0.1", 00:15:03.190 "trsvcid": "59296" 00:15:03.190 }, 00:15:03.190 "auth": { 00:15:03.190 "state": "completed", 00:15:03.190 "digest": "sha512", 00:15:03.190 "dhgroup": "ffdhe4096" 00:15:03.190 } 00:15:03.190 } 00:15:03.190 ]' 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.190 19:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:03.190 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:03.449 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:04.016 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.016 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:04.016 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.016 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.274 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.274 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:04.274 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.274 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:04.274 19:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.274 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.275 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.533 00:15:04.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.533 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.792 { 00:15:04.792 "cntlid": 129, 00:15:04.792 "qid": 0, 00:15:04.792 "state": "enabled", 00:15:04.792 "thread": "nvmf_tgt_poll_group_000", 00:15:04.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:04.792 "listen_address": { 00:15:04.792 "trtype": "TCP", 00:15:04.792 "adrfam": "IPv4", 00:15:04.792 "traddr": "10.0.0.2", 00:15:04.792 "trsvcid": "4420" 00:15:04.792 }, 00:15:04.792 "peer_address": { 00:15:04.792 "trtype": "TCP", 00:15:04.792 "adrfam": "IPv4", 00:15:04.792 "traddr": "10.0.0.1", 00:15:04.792 "trsvcid": "59332" 00:15:04.792 }, 00:15:04.792 "auth": { 00:15:04.792 "state": "completed", 00:15:04.792 "digest": "sha512", 00:15:04.792 "dhgroup": "ffdhe6144" 00:15:04.792 } 00:15:04.792 } 00:15:04.792 ]' 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.792 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.051 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:05.051 19:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.618 19:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.618 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.876 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.134 00:15:06.134 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.134 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.134 19:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.392 { 00:15:06.392 "cntlid": 131, 00:15:06.392 "qid": 0, 00:15:06.392 "state": 
"enabled", 00:15:06.392 "thread": "nvmf_tgt_poll_group_000", 00:15:06.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:06.392 "listen_address": { 00:15:06.392 "trtype": "TCP", 00:15:06.392 "adrfam": "IPv4", 00:15:06.392 "traddr": "10.0.0.2", 00:15:06.392 "trsvcid": "4420" 00:15:06.392 }, 00:15:06.392 "peer_address": { 00:15:06.392 "trtype": "TCP", 00:15:06.392 "adrfam": "IPv4", 00:15:06.392 "traddr": "10.0.0.1", 00:15:06.392 "trsvcid": "59358" 00:15:06.392 }, 00:15:06.392 "auth": { 00:15:06.392 "state": "completed", 00:15:06.392 "digest": "sha512", 00:15:06.392 "dhgroup": "ffdhe6144" 00:15:06.392 } 00:15:06.392 } 00:15:06.392 ]' 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.392 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.651 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret 
DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:06.651 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:07.220 19:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.479 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.738 00:15:07.738 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.738 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.738 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.997 { 00:15:07.997 "cntlid": 133, 00:15:07.997 "qid": 0, 00:15:07.997 "state": "enabled", 00:15:07.997 "thread": "nvmf_tgt_poll_group_000", 00:15:07.997 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:07.997 "listen_address": { 00:15:07.997 "trtype": "TCP", 00:15:07.997 "adrfam": "IPv4", 00:15:07.997 "traddr": "10.0.0.2", 00:15:07.997 "trsvcid": "4420" 00:15:07.997 }, 00:15:07.997 "peer_address": { 00:15:07.997 "trtype": "TCP", 00:15:07.997 "adrfam": "IPv4", 00:15:07.997 "traddr": "10.0.0.1", 00:15:07.997 "trsvcid": "59402" 00:15:07.997 }, 00:15:07.997 "auth": { 00:15:07.997 "state": "completed", 00:15:07.997 "digest": "sha512", 00:15:07.997 "dhgroup": "ffdhe6144" 00:15:07.997 } 
00:15:07.997 } 00:15:07.997 ]' 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.997 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.257 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:08.257 19:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:15:08.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:08.825 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:09.393 00:15:09.393 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.393 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.393 19:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.393 { 00:15:09.393 "cntlid": 135, 00:15:09.393 "qid": 0, 00:15:09.393 "state": "enabled", 00:15:09.393 "thread": "nvmf_tgt_poll_group_000", 00:15:09.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:09.393 "listen_address": { 00:15:09.393 "trtype": "TCP", 00:15:09.393 "adrfam": "IPv4", 00:15:09.393 "traddr": "10.0.0.2", 00:15:09.393 "trsvcid": "4420" 00:15:09.393 }, 00:15:09.393 "peer_address": { 00:15:09.393 "trtype": "TCP", 00:15:09.393 "adrfam": "IPv4", 00:15:09.393 "traddr": "10.0.0.1", 00:15:09.393 "trsvcid": "59416" 00:15:09.393 }, 00:15:09.393 "auth": { 00:15:09.393 "state": "completed", 00:15:09.393 "digest": "sha512", 00:15:09.393 "dhgroup": "ffdhe6144" 00:15:09.393 } 00:15:09.393 } 00:15:09.393 ]' 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.393 19:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.393 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.652 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:09.652 19:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.220 19:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.220 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:10.479 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:11.048 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.048 { 00:15:11.048 "cntlid": 137, 00:15:11.048 "qid": 0, 00:15:11.048 "state": "enabled", 00:15:11.048 "thread": "nvmf_tgt_poll_group_000", 00:15:11.048 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:11.048 "listen_address": { 00:15:11.048 "trtype": "TCP", 00:15:11.048 "adrfam": "IPv4", 00:15:11.048 "traddr": "10.0.0.2", 00:15:11.048 "trsvcid": "4420" 00:15:11.048 }, 00:15:11.048 "peer_address": { 00:15:11.048 "trtype": "TCP", 00:15:11.048 "adrfam": "IPv4", 00:15:11.048 "traddr": "10.0.0.1", 00:15:11.048 "trsvcid": "59438" 00:15:11.048 }, 00:15:11.048 "auth": { 00:15:11.048 "state": "completed", 00:15:11.048 "digest": "sha512", 00:15:11.048 "dhgroup": "ffdhe8192" 00:15:11.048 } 00:15:11.048 } 00:15:11.048 ]' 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.048 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.307 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:11.307 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.307 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.307 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.307 19:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.307 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret 
DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:11.307 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:11.875 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.875 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:11.875 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.875 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:12.134 19:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.134 19:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.702 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.702 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:12.702 { 00:15:12.702 "cntlid": 139, 00:15:12.702 "qid": 0, 00:15:12.702 "state": "enabled", 00:15:12.702 "thread": "nvmf_tgt_poll_group_000", 00:15:12.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:12.702 "listen_address": { 00:15:12.702 "trtype": "TCP", 00:15:12.702 "adrfam": "IPv4", 00:15:12.702 "traddr": "10.0.0.2", 00:15:12.702 "trsvcid": "4420" 00:15:12.702 }, 00:15:12.702 "peer_address": { 00:15:12.702 "trtype": "TCP", 00:15:12.702 "adrfam": "IPv4", 00:15:12.702 "traddr": "10.0.0.1", 00:15:12.702 "trsvcid": "51552" 00:15:12.702 }, 00:15:12.702 "auth": { 00:15:12.702 "state": 
"completed", 00:15:12.702 "digest": "sha512", 00:15:12.702 "dhgroup": "ffdhe8192" 00:15:12.702 } 00:15:12.702 } 00:15:12.702 ]' 00:15:12.703 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:12.962 19:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: --dhchap-ctrl-secret DHHC-1:02:MzU2MTRhYWQyZjU0NGI3YThlMjNhODVmMWM2ZTBlZjY0MDNkMjdmZjAxYTQ4MzA5mSCfwQ==: 00:15:13.899 19:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.899 19:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.533 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.533 
19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.533 { 00:15:14.533 "cntlid": 141, 00:15:14.533 "qid": 0, 00:15:14.533 "state": "enabled", 00:15:14.533 "thread": "nvmf_tgt_poll_group_000", 00:15:14.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:14.533 "listen_address": { 00:15:14.533 "trtype": "TCP", 00:15:14.533 "adrfam": "IPv4", 00:15:14.533 "traddr": "10.0.0.2", 00:15:14.533 "trsvcid": "4420" 00:15:14.533 }, 00:15:14.533 "peer_address": { 00:15:14.533 "trtype": "TCP", 00:15:14.533 "adrfam": "IPv4", 00:15:14.533 "traddr": "10.0.0.1", 00:15:14.533 "trsvcid": "51574" 00:15:14.533 }, 00:15:14.533 "auth": { 00:15:14.533 "state": "completed", 00:15:14.533 "digest": "sha512", 00:15:14.533 "dhgroup": "ffdhe8192" 00:15:14.533 } 00:15:14.533 } 00:15:14.533 ]' 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.533 19:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.533 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.534 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.840 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:14.840 19:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:01:OGIyYTIwMWJiZDg5ZmYzOTZlMDI5NDU0ZmY0YzdlZTjQEd1M: 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.411 
19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.411 19:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:15.411 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.412 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:15.979 00:15:15.979 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.979 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.979 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.238 { 00:15:16.238 "cntlid": 143, 
00:15:16.238 "qid": 0, 00:15:16.238 "state": "enabled", 00:15:16.238 "thread": "nvmf_tgt_poll_group_000", 00:15:16.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:16.238 "listen_address": { 00:15:16.238 "trtype": "TCP", 00:15:16.238 "adrfam": "IPv4", 00:15:16.238 "traddr": "10.0.0.2", 00:15:16.238 "trsvcid": "4420" 00:15:16.238 }, 00:15:16.238 "peer_address": { 00:15:16.238 "trtype": "TCP", 00:15:16.238 "adrfam": "IPv4", 00:15:16.238 "traddr": "10.0.0.1", 00:15:16.238 "trsvcid": "51600" 00:15:16.238 }, 00:15:16.238 "auth": { 00:15:16.238 "state": "completed", 00:15:16.238 "digest": "sha512", 00:15:16.238 "dhgroup": "ffdhe8192" 00:15:16.238 } 00:15:16.238 } 00:15:16.238 ]' 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.238 19:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.497 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:16.497 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.064 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.323 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.323 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.323 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.323 19:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.582 00:15:17.582 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.582 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.582 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.842 { 00:15:17.842 "cntlid": 145, 00:15:17.842 "qid": 0, 00:15:17.842 "state": "enabled", 00:15:17.842 "thread": "nvmf_tgt_poll_group_000", 00:15:17.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:17.842 "listen_address": { 
00:15:17.842 "trtype": "TCP", 00:15:17.842 "adrfam": "IPv4", 00:15:17.842 "traddr": "10.0.0.2", 00:15:17.842 "trsvcid": "4420" 00:15:17.842 }, 00:15:17.842 "peer_address": { 00:15:17.842 "trtype": "TCP", 00:15:17.842 "adrfam": "IPv4", 00:15:17.842 "traddr": "10.0.0.1", 00:15:17.842 "trsvcid": "51634" 00:15:17.842 }, 00:15:17.842 "auth": { 00:15:17.842 "state": "completed", 00:15:17.842 "digest": "sha512", 00:15:17.842 "dhgroup": "ffdhe8192" 00:15:17.842 } 00:15:17.842 } 00:15:17.842 ]' 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.842 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.102 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:18.102 19:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:00:MDVjOGY0MDU5ODRiMmQzNmM3NzY2N2RkMGE3ZTc4MThlNTgxNDg1NTMxN2ZlYmMyMZh00w==: --dhchap-ctrl-secret DHHC-1:03:Nzc4MDdkOGIwZDRjZjExZDQ5ZTI4ZGM4YmEzNTEyNDk0NGU0MjkxOTg0ODA4ZDAxMDllNDU4N2E3YzExOTk1Yh066Bo=: 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:18.670 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:19.238 request: 00:15:19.238 { 00:15:19.238 "name": "nvme0", 00:15:19.238 "trtype": "tcp", 00:15:19.238 "traddr": "10.0.0.2", 00:15:19.238 "adrfam": "ipv4", 00:15:19.238 "trsvcid": "4420", 00:15:19.238 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:19.238 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:19.238 "prchk_reftag": false, 00:15:19.238 "prchk_guard": false, 00:15:19.238 "hdgst": false, 00:15:19.238 "ddgst": 
false, 00:15:19.238 "dhchap_key": "key2", 00:15:19.238 "allow_unrecognized_csi": false, 00:15:19.238 "method": "bdev_nvme_attach_controller", 00:15:19.238 "req_id": 1 00:15:19.238 } 00:15:19.238 Got JSON-RPC error response 00:15:19.238 response: 00:15:19.238 { 00:15:19.238 "code": -5, 00:15:19.238 "message": "Input/output error" 00:15:19.238 } 00:15:19.238 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:19.238 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.238 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.238 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.239 19:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:19.498 request: 00:15:19.498 { 00:15:19.498 "name": "nvme0", 00:15:19.498 "trtype": "tcp", 00:15:19.498 "traddr": "10.0.0.2", 
00:15:19.498 "adrfam": "ipv4", 00:15:19.498 "trsvcid": "4420", 00:15:19.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:19.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:19.498 "prchk_reftag": false, 00:15:19.498 "prchk_guard": false, 00:15:19.498 "hdgst": false, 00:15:19.498 "ddgst": false, 00:15:19.498 "dhchap_key": "key1", 00:15:19.498 "dhchap_ctrlr_key": "ckey2", 00:15:19.498 "allow_unrecognized_csi": false, 00:15:19.498 "method": "bdev_nvme_attach_controller", 00:15:19.498 "req_id": 1 00:15:19.498 } 00:15:19.498 Got JSON-RPC error response 00:15:19.498 response: 00:15:19.498 { 00:15:19.498 "code": -5, 00:15:19.498 "message": "Input/output error" 00:15:19.498 } 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 
00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.498 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:19.499 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.066 request: 00:15:20.066 { 00:15:20.066 "name": "nvme0", 00:15:20.066 "trtype": "tcp", 00:15:20.066 "traddr": "10.0.0.2", 00:15:20.066 "adrfam": "ipv4", 00:15:20.066 "trsvcid": "4420", 00:15:20.066 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:20.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:20.066 "prchk_reftag": false, 00:15:20.066 "prchk_guard": false, 00:15:20.066 "hdgst": false, 00:15:20.066 "ddgst": false, 00:15:20.066 "dhchap_key": "key1", 00:15:20.066 "dhchap_ctrlr_key": "ckey1", 00:15:20.066 "allow_unrecognized_csi": false, 00:15:20.066 "method": "bdev_nvme_attach_controller", 00:15:20.066 "req_id": 1 00:15:20.066 } 00:15:20.066 Got JSON-RPC error response 00:15:20.066 response: 00:15:20.066 { 00:15:20.066 "code": -5, 00:15:20.066 "message": "Input/output error" 00:15:20.066 } 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.066 
19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3677962 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3677962 ']' 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3677962 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677962 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677962' 00:15:20.066 killing process with pid 3677962 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3677962 00:15:20.066 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3677962 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3704872 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3704872 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3704872 ']' 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.067 19:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; 
nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3704872 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3704872 ']' 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.326 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.585 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 null0 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in 
"${!keys[@]}" 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.BuV 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.yvz ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yvz 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Gum 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.qKe ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key 
ckey1 /tmp/spdk.key-sha384.qKe 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KK1 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.oyL ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.oyL 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.6jM 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:20.586 19:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.522 nvme0n1 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:21.522 { 00:15:21.522 "cntlid": 1, 00:15:21.522 "qid": 0, 00:15:21.522 "state": "enabled", 00:15:21.522 "thread": "nvmf_tgt_poll_group_000", 00:15:21.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:21.522 "listen_address": { 00:15:21.522 "trtype": "TCP", 00:15:21.522 
"adrfam": "IPv4", 00:15:21.522 "traddr": "10.0.0.2", 00:15:21.522 "trsvcid": "4420" 00:15:21.522 }, 00:15:21.522 "peer_address": { 00:15:21.522 "trtype": "TCP", 00:15:21.522 "adrfam": "IPv4", 00:15:21.522 "traddr": "10.0.0.1", 00:15:21.522 "trsvcid": "51684" 00:15:21.522 }, 00:15:21.522 "auth": { 00:15:21.522 "state": "completed", 00:15:21.522 "digest": "sha512", 00:15:21.522 "dhgroup": "ffdhe8192" 00:15:21.522 } 00:15:21.522 } 00:15:21.522 ]' 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.522 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.523 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.523 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.782 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:21.782 19:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 
801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key3 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:22.351 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT 
bdev_connect -b nvme0 --dhchap-key key3 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.610 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.870 request: 00:15:22.870 { 00:15:22.870 "name": "nvme0", 00:15:22.870 "trtype": "tcp", 00:15:22.870 "traddr": "10.0.0.2", 00:15:22.870 "adrfam": "ipv4", 00:15:22.870 "trsvcid": "4420", 00:15:22.870 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:22.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:22.870 
"prchk_reftag": false, 00:15:22.870 "prchk_guard": false, 00:15:22.870 "hdgst": false, 00:15:22.870 "ddgst": false, 00:15:22.870 "dhchap_key": "key3", 00:15:22.870 "allow_unrecognized_csi": false, 00:15:22.870 "method": "bdev_nvme_attach_controller", 00:15:22.870 "req_id": 1 00:15:22.870 } 00:15:22.870 Got JSON-RPC error response 00:15:22.870 response: 00:15:22.870 { 00:15:22.870 "code": -5, 00:15:22.870 "message": "Input/output error" 00:15:22.870 } 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 
--dhchap-key key3 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.870 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.871 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.871 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:23.130 request: 00:15:23.130 { 00:15:23.130 "name": "nvme0", 00:15:23.130 "trtype": "tcp", 00:15:23.130 "traddr": "10.0.0.2", 00:15:23.130 "adrfam": "ipv4", 00:15:23.130 "trsvcid": "4420", 00:15:23.130 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:23.130 "prchk_reftag": false, 00:15:23.130 "prchk_guard": false, 00:15:23.130 "hdgst": false, 00:15:23.130 "ddgst": false, 00:15:23.130 "dhchap_key": "key3", 00:15:23.130 "allow_unrecognized_csi": false, 00:15:23.130 "method": "bdev_nvme_attach_controller", 00:15:23.130 "req_id": 1 00:15:23.130 } 
00:15:23.130 Got JSON-RPC error response 00:15:23.130 response: 00:15:23.130 { 00:15:23.130 "code": -5, 00:15:23.130 "message": "Input/output error" 00:15:23.130 } 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.130 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.390 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.390 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:23.390 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.390 19:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.390 19:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.390 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:23.650 request: 00:15:23.650 { 00:15:23.650 "name": "nvme0", 00:15:23.650 "trtype": "tcp", 00:15:23.650 "traddr": "10.0.0.2", 00:15:23.650 "adrfam": "ipv4", 00:15:23.650 "trsvcid": "4420", 00:15:23.650 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:23.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:23.650 "prchk_reftag": false, 00:15:23.650 "prchk_guard": false, 00:15:23.650 "hdgst": false, 00:15:23.650 "ddgst": false, 00:15:23.650 "dhchap_key": "key0", 00:15:23.650 "dhchap_ctrlr_key": "key1", 00:15:23.650 "allow_unrecognized_csi": false, 00:15:23.650 "method": "bdev_nvme_attach_controller", 00:15:23.650 "req_id": 1 00:15:23.650 } 00:15:23.650 Got JSON-RPC error response 00:15:23.650 response: 00:15:23.650 { 00:15:23.650 "code": -5, 00:15:23.650 "message": "Input/output error" 00:15:23.650 } 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 
0 )) 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:23.650 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:23.650 nvme0n1 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.909 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.167 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 00:15:24.167 19:21:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.167 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.167 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.167 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:24.167 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:24.168 19:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:24.734 nvme0n1 00:15:24.734 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:24.734 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.734 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:24.992 19:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:24.992 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.252 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.252 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:25.252 19:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid 801c19ac-fce9-ec11-9bc7-a4bf019282bb -l 0 --dhchap-secret DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: --dhchap-ctrl-secret DHHC-1:03:MTM5M2FhYzgzNDNmYjEzZTRhODczNjNhMDgzNmEwMGUzYTc3M2M4MGIyZWJjNGI3ODdlYjhmMjIzNTVkYWQ3YVJUHvY=: 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:25.833 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 
00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:26.092 19:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:26.352 request: 00:15:26.352 { 00:15:26.352 "name": "nvme0", 00:15:26.352 "trtype": "tcp", 00:15:26.352 "traddr": "10.0.0.2", 00:15:26.352 "adrfam": "ipv4", 00:15:26.352 "trsvcid": "4420", 00:15:26.352 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:26.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb", 00:15:26.352 "prchk_reftag": false, 00:15:26.352 "prchk_guard": false, 00:15:26.352 "hdgst": false, 00:15:26.352 "ddgst": false, 00:15:26.352 "dhchap_key": "key1", 00:15:26.352 "allow_unrecognized_csi": false, 00:15:26.352 "method": "bdev_nvme_attach_controller", 00:15:26.352 "req_id": 1 00:15:26.352 } 00:15:26.352 Got JSON-RPC error response 00:15:26.352 response: 00:15:26.352 { 00:15:26.352 "code": -5, 00:15:26.352 "message": "Input/output error" 00:15:26.352 } 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:26.352 19:22:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:26.352 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:27.288 nvme0n1 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.288 19:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:27.547 nvme0n1 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:27.547 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.807 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.807 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.807 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: '' 2s 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: ]] 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDAyYzk1ZmNiNDc0NzViZWZhNDllODVmMjE3M2NlMTGwg6Sm: 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:28.067 19:22:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:28.067 19:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: 2s 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:29.972 19:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: ]] 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGZiNGI3N2NiZjJiZDEyNDJlNmQ5NjM0Y2UyODdhZjM1MzM4YzU2MjE0MjU3MTU0gccb2w==: 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:29.972 19:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 
00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.511 19:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:32.770 nvme0n1 00:15:32.770 19:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:32.770 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.770 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.770 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.770 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:32.770 19:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:33.336 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:33.595 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:33.595 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:33.595 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:33.854 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:34.111 request: 00:15:34.111 { 00:15:34.111 "name": "nvme0", 00:15:34.111 "dhchap_key": "key1", 00:15:34.111 "dhchap_ctrlr_key": "key3", 00:15:34.111 "method": "bdev_nvme_set_keys", 00:15:34.111 "req_id": 1 00:15:34.111 } 00:15:34.111 Got JSON-RPC error response 00:15:34.111 response: 00:15:34.111 { 00:15:34.111 "code": -13, 00:15:34.111 "message": "Permission denied" 00:15:34.111 } 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 
-- # jq length 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:34.111 19:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.369 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:34.369 19:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:35.301 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:35.301 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:35.301 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.558 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:35.559 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 
-- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:35.559 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:36.123 nvme0n1 00:15:36.123 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:36.123 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.123 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:15:36.383 19:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:36.383 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:36.384 19:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:36.642 request: 00:15:36.642 { 00:15:36.642 "name": "nvme0", 00:15:36.643 "dhchap_key": "key2", 00:15:36.643 "dhchap_ctrlr_key": "key0", 00:15:36.643 "method": "bdev_nvme_set_keys", 00:15:36.643 "req_id": 1 00:15:36.643 } 00:15:36.643 Got JSON-RPC error response 00:15:36.643 response: 00:15:36.643 { 00:15:36.643 "code": -13, 00:15:36.643 "message": "Permission denied" 00:15:36.643 } 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:36.643 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.902 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:36.902 19:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:37.837 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:37.837 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.837 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3677982 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3677982 ']' 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3677982 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3677982 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:38.096 19:22:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3677982' 00:15:38.096 killing process with pid 3677982 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3677982 00:15:38.096 19:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3677982 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:38.355 rmmod nvme_tcp 00:15:38.355 rmmod nvme_fabrics 00:15:38.355 rmmod nvme_keyring 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3704872 ']' 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3704872 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3704872 ']' 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3704872 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3704872 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3704872' 00:15:38.355 killing process with pid 3704872 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3704872 00:15:38.355 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3704872 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.614 
19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.614 19:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.BuV /tmp/spdk.key-sha256.Gum /tmp/spdk.key-sha384.KK1 /tmp/spdk.key-sha512.6jM /tmp/spdk.key-sha512.yvz /tmp/spdk.key-sha384.qKe /tmp/spdk.key-sha256.oyL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:15:40.518 00:15:40.518 real 2m17.685s 00:15:40.518 user 5m9.481s 00:15:40.518 sys 0m19.615s 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.518 ************************************ 00:15:40.518 END TEST nvmf_auth_target 00:15:40.518 ************************************ 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:40.518 19:22:14 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.518 ************************************ 00:15:40.518 START TEST nvmf_bdevio_no_huge 00:15:40.518 ************************************ 00:15:40.518 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:15:40.778 * Looking for test storage... 00:15:40.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.778 19:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- 
# echo 2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.778 --rc genhtml_branch_coverage=1 00:15:40.778 --rc genhtml_function_coverage=1 00:15:40.778 --rc genhtml_legend=1 00:15:40.778 --rc geninfo_all_blocks=1 00:15:40.778 --rc geninfo_unexecuted_blocks=1 00:15:40.778 00:15:40.778 ' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.778 --rc genhtml_branch_coverage=1 00:15:40.778 --rc genhtml_function_coverage=1 00:15:40.778 --rc genhtml_legend=1 00:15:40.778 --rc geninfo_all_blocks=1 00:15:40.778 --rc geninfo_unexecuted_blocks=1 00:15:40.778 00:15:40.778 ' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.778 --rc genhtml_branch_coverage=1 00:15:40.778 --rc genhtml_function_coverage=1 00:15:40.778 --rc genhtml_legend=1 00:15:40.778 --rc geninfo_all_blocks=1 00:15:40.778 --rc geninfo_unexecuted_blocks=1 00:15:40.778 00:15:40.778 ' 00:15:40.778 19:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:40.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.778 --rc genhtml_branch_coverage=1 00:15:40.778 --rc genhtml_function_coverage=1 00:15:40.778 --rc genhtml_legend=1 00:15:40.778 --rc geninfo_all_blocks=1 00:15:40.778 --rc geninfo_unexecuted_blocks=1 00:15:40.778 00:15:40.778 ' 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.778 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.779 19:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.779 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.779 19:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.779 19:22:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:46.049 19:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:46.049 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:46.049 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:46.050 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:46.050 Found net devices under 0000:31:00.0: cvl_0_0 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:46.050 Found net devices under 0000:31:00.1: cvl_0_1 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.050 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:46.309 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:46.309 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:46.309 19:22:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:46.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:46.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:15:46.309 00:15:46.309 --- 10.0.0.2 ping statistics --- 00:15:46.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.309 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:46.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:15:46.309 00:15:46.309 --- 10.0.0.1 ping statistics --- 00:15:46.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.309 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.309 19:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:46.309 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3713178 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3713178 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3713178 ']' 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.569 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.569 [2024-11-26 19:22:20.220627] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:15:46.569 [2024-11-26 19:22:20.220698] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:15:46.569 [2024-11-26 19:22:20.311001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.569 [2024-11-26 19:22:20.357741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.569 [2024-11-26 19:22:20.357786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.569 [2024-11-26 19:22:20.357793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.569 [2024-11-26 19:22:20.357799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.569 [2024-11-26 19:22:20.357804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:46.569 [2024-11-26 19:22:20.359253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:46.569 [2024-11-26 19:22:20.359511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:46.569 [2024-11-26 19:22:20.359672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.569 [2024-11-26 19:22:20.359672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 [2024-11-26 19:22:20.502538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:46.829 19:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 Malloc0 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:46.829 [2024-11-26 19:22:20.539589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.829 19:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:15:46.829 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:15:46.829 { 00:15:46.829 "params": { 00:15:46.829 "name": "Nvme$subsystem", 00:15:46.829 "trtype": "$TEST_TRANSPORT", 00:15:46.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.829 "adrfam": "ipv4", 00:15:46.829 "trsvcid": "$NVMF_PORT", 00:15:46.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.830 "hdgst": ${hdgst:-false}, 00:15:46.830 "ddgst": ${ddgst:-false} 00:15:46.830 }, 00:15:46.830 "method": "bdev_nvme_attach_controller" 00:15:46.830 } 00:15:46.830 EOF 00:15:46.830 )") 00:15:46.830 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:15:46.830 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:15:46.830 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:15:46.830 19:22:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:15:46.830 "params": { 00:15:46.830 "name": "Nvme1", 00:15:46.830 "trtype": "tcp", 00:15:46.830 "traddr": "10.0.0.2", 00:15:46.830 "adrfam": "ipv4", 00:15:46.830 "trsvcid": "4420", 00:15:46.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.830 "hdgst": false, 00:15:46.830 "ddgst": false 00:15:46.830 }, 00:15:46.830 "method": "bdev_nvme_attach_controller" 00:15:46.830 }' 00:15:46.830 [2024-11-26 19:22:20.580788] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:15:46.830 [2024-11-26 19:22:20.580855] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3713420 ] 00:15:46.830 [2024-11-26 19:22:20.668307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:47.089 [2024-11-26 19:22:20.728531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.089 [2024-11-26 19:22:20.728694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:47.089 [2024-11-26 19:22:20.728695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.089 I/O targets: 00:15:47.089 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:47.089 00:15:47.089 00:15:47.089 CUnit - A unit testing framework for C - Version 2.1-3 00:15:47.089 http://cunit.sourceforge.net/ 00:15:47.089 00:15:47.089 00:15:47.089 Suite: bdevio tests on: Nvme1n1 00:15:47.348 Test: blockdev write read block ...passed 00:15:47.348 Test: blockdev write zeroes read block ...passed 00:15:47.348 Test: blockdev write zeroes read no split ...passed 00:15:47.348 Test: blockdev write zeroes 
read split ...passed 00:15:47.348 Test: blockdev write zeroes read split partial ...passed 00:15:47.348 Test: blockdev reset ...[2024-11-26 19:22:21.037448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:47.348 [2024-11-26 19:22:21.037509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed8f70 (9): Bad file descriptor 00:15:47.348 [2024-11-26 19:22:21.054995] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:15:47.348 passed 00:15:47.348 Test: blockdev write read 8 blocks ...passed 00:15:47.348 Test: blockdev write read size > 128k ...passed 00:15:47.348 Test: blockdev write read invalid size ...passed 00:15:47.348 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:47.348 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:47.348 Test: blockdev write read max offset ...passed 00:15:47.607 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:47.607 Test: blockdev writev readv 8 blocks ...passed 00:15:47.607 Test: blockdev writev readv 30 x 1block ...passed 00:15:47.607 Test: blockdev writev readv block ...passed 00:15:47.607 Test: blockdev writev readv size > 128k ...passed 00:15:47.607 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:47.607 Test: blockdev comparev and writev ...[2024-11-26 19:22:21.318123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.318156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.318173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 
19:22:21.318181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.318638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.318664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.318673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.319106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.319118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.319132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.319140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.319603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.319615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.319628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:15:47.607 [2024-11-26 19:22:21.319636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:47.607 passed 00:15:47.607 Test: blockdev nvme passthru rw ...passed 00:15:47.607 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:22:21.403012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.607 [2024-11-26 19:22:21.403028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.403292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.607 [2024-11-26 19:22:21.403304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.403543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.607 [2024-11-26 19:22:21.403554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:47.607 [2024-11-26 19:22:21.403815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:47.607 [2024-11-26 19:22:21.403826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:47.607 passed 00:15:47.607 Test: blockdev nvme admin passthru ...passed 00:15:47.607 Test: blockdev copy ...passed 00:15:47.607 00:15:47.607 Run Summary: Type Total Ran Passed Failed Inactive 00:15:47.607 suites 1 1 n/a 0 0 00:15:47.607 tests 23 23 23 0 0 00:15:47.607 asserts 152 152 152 0 n/a 00:15:47.607 00:15:47.607 Elapsed time = 1.111 seconds 
00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.867 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.867 rmmod nvme_tcp 00:15:48.126 rmmod nvme_fabrics 00:15:48.126 rmmod nvme_keyring 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3713178 ']' 00:15:48.126 19:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3713178 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3713178 ']' 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3713178 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3713178 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3713178' 00:15:48.126 killing process with pid 3713178 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3713178 00:15:48.126 19:22:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3713178 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:15:48.386 19:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:48.386 19:22:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.296 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:50.296 00:15:50.296 real 0m9.768s 00:15:50.296 user 0m10.086s 00:15:50.296 sys 0m5.058s 00:15:50.296 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.296 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:15:50.296 ************************************ 00:15:50.296 END TEST nvmf_bdevio_no_huge 00:15:50.296 ************************************ 00:15:50.296 19:22:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.297 19:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.297 19:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.297 19:22:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.297 
************************************ 00:15:50.297 START TEST nvmf_tls 00:15:50.297 ************************************ 00:15:50.297 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:15:50.556 * Looking for test storage... 00:15:50.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.556 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.557 --rc genhtml_branch_coverage=1 00:15:50.557 --rc genhtml_function_coverage=1 00:15:50.557 --rc genhtml_legend=1 00:15:50.557 --rc geninfo_all_blocks=1 00:15:50.557 --rc geninfo_unexecuted_blocks=1 00:15:50.557 00:15:50.557 ' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.557 --rc genhtml_branch_coverage=1 00:15:50.557 --rc genhtml_function_coverage=1 00:15:50.557 --rc genhtml_legend=1 00:15:50.557 --rc geninfo_all_blocks=1 00:15:50.557 --rc geninfo_unexecuted_blocks=1 00:15:50.557 00:15:50.557 ' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.557 --rc genhtml_branch_coverage=1 00:15:50.557 --rc genhtml_function_coverage=1 00:15:50.557 --rc genhtml_legend=1 00:15:50.557 --rc geninfo_all_blocks=1 00:15:50.557 --rc geninfo_unexecuted_blocks=1 00:15:50.557 00:15:50.557 ' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.557 --rc genhtml_branch_coverage=1 00:15:50.557 --rc genhtml_function_coverage=1 00:15:50.557 --rc genhtml_legend=1 00:15:50.557 --rc geninfo_all_blocks=1 00:15:50.557 --rc geninfo_unexecuted_blocks=1 00:15:50.557 00:15:50.557 ' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.557 
19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:50.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:50.557 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:15:50.558 19:22:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.831 19:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.831 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.831 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.832 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.832 19:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.832 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.832 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.832 19:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:55.832 
19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.832 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:56.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:56.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:15:56.091 00:15:56.091 --- 10.0.0.2 ping statistics --- 00:15:56.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.091 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:56.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:56.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:15:56.091 00:15:56.091 --- 10.0.0.1 ping statistics --- 00:15:56.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.091 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.091 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3718119 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3718119 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
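The namespace plumbing logged above (common.sh@250-291) can be condensed into a short sketch. It is a dry run: commands are printed rather than executed, since the real sequence needs root. Interface names (cvl_0_0, cvl_0_1), addresses, and the port-4420 rule are copied from the log; the `emit` helper is ours, not SPDK's.

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing above: move the target NIC into
# its own network namespace, address both ends, open TCP port 4420, then ping
# across in both directions. Swap emit's body for 'sudo "$@"' to run for real.
TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

cmds=""
emit() { printf '%s\n' "$*"; cmds+="$*"$'\n'; }   # print and record each command

emit ip -4 addr flush "$TGT_IF"
emit ip -4 addr flush "$INI_IF"
emit ip netns add "$NS"
emit ip link set "$TGT_IF" netns "$NS"            # target side lives in the netns
emit ip addr add "$INI_IP/24" dev "$INI_IF"       # initiator side stays in the root ns
emit ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
emit ip link set "$INI_IF" up
emit ip netns exec "$NS" ip link set "$TGT_IF" up
emit ip netns exec "$NS" ip link set lo up
emit iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
emit ping -c 1 "$TGT_IP"                          # root ns -> namespaced target
emit ip netns exec "$NS" ping -c 1 "$INI_IP"      # and back again
```

The two pings mirror the checks at common.sh@290-291: each namespace must reach the other before the target is started inside `cvl_0_0_ns_spdk`.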
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3718119 ']' 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.092 19:22:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.092 [2024-11-26 19:22:29.924035] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:15:56.092 [2024-11-26 19:22:29.924112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.351 [2024-11-26 19:22:30.020304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.351 [2024-11-26 19:22:30.072946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:56.351 [2024-11-26 19:22:30.073001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:56.351 [2024-11-26 19:22:30.073011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:56.351 [2024-11-26 19:22:30.073018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:56.351 [2024-11-26 19:22:30.073024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:56.351 [2024-11-26 19:22:30.073853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:15:56.919 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:15:57.177 true 00:15:57.177 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:57.177 19:22:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:15:57.436 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:15:57.436 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:15:57.436 
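The TLS option checks at target/tls.sh@74-114 all follow one set-then-verify pattern: push an option over JSON-RPC, read it back with `sock_impl_get_options | jq -r`, and fail unless the readback matches. A minimal sketch of that pattern, where `resp` is a hand-written stand-in for the RPC response (only `tls_version` is taken from the log) and `json_get` stands in for the log's `jq -r .tls_version`:

```shell
# Set-then-verify sketch: extract a field from the (stand-in) get-options JSON
# and compare it to the value we just set, as tls.sh does after each RPC.
json_get() { python3 -c 'import json, sys; print(json.load(sys.stdin)[sys.argv[1]])' "$1"; }

# Pretend we just ran: rpc.py sock_impl_set_options -i ssl --tls-version 13
resp='{"tls_version": 13}'

version=$(printf '%s' "$resp" | json_get tls_version)
if [[ $version != 13 ]]; then       # the xtrace above renders this as [[ 13 != \1\3 ]]
    echo "tls_version readback mismatch: $version" >&2
    exit 1
fi
echo "tls_version verified: $version"
```

The same shape repeats for `--tls-version 7` and for the `--enable-ktls` / `--disable-ktls` toggles further down, each time comparing the `jq`-extracted field against the value that was set.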
19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:57.436 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:57.436 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:15:57.694 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:15:57.694 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:15:57.694 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:15:57.694 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:57.694 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:15:57.953 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:15:57.953 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:15:57.954 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:57.954 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:15:58.213 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:15:58.213 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:15:58.213 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:15:58.213 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.213 19:22:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:15:58.471 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:15:58.471 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:15:58.471 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:15:58.471 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:15:58.471 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:58.731 19:22:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.xit6rdGDM2 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:15:58.731 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.jFBz4v5hhh 00:15:58.732 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:15:58.732 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:15:58.732 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.xit6rdGDM2 00:15:58.732 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.jFBz4v5hhh 00:15:58.732 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:15:58.991 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:15:59.250 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.xit6rdGDM2 00:15:59.250 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xit6rdGDM2 00:15:59.250 19:22:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:59.250 [2024-11-26 19:22:33.036173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.250 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:59.509 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:59.509 [2024-11-26 19:22:33.344912] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:59.509 [2024-11-26 19:22:33.345132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:59.509 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:59.767 malloc0 00:15:59.767 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:00.026 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xit6rdGDM2 00:16:00.026 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:00.284 19:22:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xit6rdGDM2 00:16:10.361 Initializing NVMe Controllers 00:16:10.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:10.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:10.361 Initialization complete. Launching workers. 
00:16:10.361 ======================================================== 00:16:10.361 Latency(us) 00:16:10.361 Device Information : IOPS MiB/s Average min max 00:16:10.361 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18724.95 73.14 3418.12 1099.37 3998.88 00:16:10.361 ======================================================== 00:16:10.361 Total : 18724.95 73.14 3418.12 1099.37 3998.88 00:16:10.361 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xit6rdGDM2 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xit6rdGDM2 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3721378 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3721378 /var/tmp/bdevperf.sock 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3721378 ']' 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:10.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:10.361 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:10.361 [2024-11-26 19:22:44.114175] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:10.361 [2024-11-26 19:22:44.114227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3721378 ] 00:16:10.361 [2024-11-26 19:22:44.191840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.622 [2024-11-26 19:22:44.227032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.191 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:11.191 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:11.191 19:22:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xit6rdGDM2 00:16:11.191 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:16:11.451 [2024-11-26 19:22:45.183658] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:11.451 TLSTESTn1 00:16:11.451 19:22:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:11.711 Running I/O for 10 seconds... 00:16:13.598 3706.00 IOPS, 14.48 MiB/s [2024-11-26T18:22:48.406Z] 4172.00 IOPS, 16.30 MiB/s [2024-11-26T18:22:49.349Z] 3970.67 IOPS, 15.51 MiB/s [2024-11-26T18:22:50.738Z] 3910.00 IOPS, 15.27 MiB/s [2024-11-26T18:22:51.681Z] 4030.20 IOPS, 15.74 MiB/s [2024-11-26T18:22:52.624Z] 4231.67 IOPS, 16.53 MiB/s [2024-11-26T18:22:53.567Z] 4191.00 IOPS, 16.37 MiB/s [2024-11-26T18:22:54.511Z] 4207.88 IOPS, 16.44 MiB/s [2024-11-26T18:22:55.453Z] 4234.11 IOPS, 16.54 MiB/s [2024-11-26T18:22:55.453Z] 4318.20 IOPS, 16.87 MiB/s 00:16:21.588 Latency(us) 00:16:21.588 [2024-11-26T18:22:55.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.588 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:21.588 Verification LBA range: start 0x0 length 0x2000 00:16:21.588 TLSTESTn1 : 10.06 4303.35 16.81 0.00 0.00 29635.77 6253.23 67283.63 00:16:21.588 [2024-11-26T18:22:55.453Z] =================================================================================================================== 00:16:21.588 [2024-11-26T18:22:55.453Z] Total : 4303.35 16.81 0.00 0.00 29635.77 6253.23 67283.63 00:16:21.588 { 00:16:21.588 "results": [ 00:16:21.588 { 00:16:21.588 "job": "TLSTESTn1", 00:16:21.588 "core_mask": "0x4", 00:16:21.588 "workload": "verify", 00:16:21.588 "status": "finished", 00:16:21.588 "verify_range": { 00:16:21.588 "start": 0, 00:16:21.588 "length": 8192 00:16:21.588 }, 00:16:21.588 "queue_depth": 128, 00:16:21.588 "io_size": 4096, 00:16:21.588 "runtime": 10.064015, 
00:16:21.588 "iops": 4303.352091585714, 00:16:21.588 "mibps": 16.809969107756697, 00:16:21.588 "io_failed": 0, 00:16:21.588 "io_timeout": 0, 00:16:21.588 "avg_latency_us": 29635.768210764512, 00:16:21.588 "min_latency_us": 6253.2266666666665, 00:16:21.588 "max_latency_us": 67283.62666666666 00:16:21.588 } 00:16:21.588 ], 00:16:21.588 "core_count": 1 00:16:21.588 } 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3721378 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3721378 ']' 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3721378 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.588 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3721378 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3721378' 00:16:21.848 killing process with pid 3721378 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3721378 00:16:21.848 Received shutdown signal, test time was about 10.000000 seconds 00:16:21.848 00:16:21.848 Latency(us) 00:16:21.848 [2024-11-26T18:22:55.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:21.848 [2024-11-26T18:22:55.713Z] 
=================================================================================================================== 00:16:21.848 [2024-11-26T18:22:55.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3721378 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jFBz4v5hhh 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:21.848 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jFBz4v5hhh 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jFBz4v5hhh 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jFBz4v5hhh 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3723874 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3723874 /var/tmp/bdevperf.sock 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3723874 ']' 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:21.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:21.849 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:21.849 [2024-11-26 19:22:55.622143] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
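The `NVMeTLSkey-1:01:...:` strings built earlier at target/tls.sh@119-120 use the TLS PSK interchange format: the configured key's ASCII bytes plus a little-endian CRC-32, base64-encoded and framed as `<prefix>:<hash>:<base64>:`. The sketch below is our reconstruction of what common.sh's `format_key` python body computes, not SPDK's verbatim source:

```shell
# Reconstruction of format_key from nvmf/common.sh@730-733: take the key as
# ASCII bytes, append a little-endian CRC-32 of those bytes, base64-encode,
# and frame the result as NVMeTLSkey-1:<hash-id>:<base64>:  (hash id 01 here).
format_key() {  # format_key <prefix> <key> <hash-id>
  python3 - "$1" "$2" "$3" <<'PYEOF'
import base64, sys, zlib
prefix, key, hash_id = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended little-endian
print("{}:{:02x}:{}:".format(prefix, hash_id, base64.b64encode(key + crc).decode()))
PYEOF
}

key=$(format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1)
echo "$key"
```

Running this against the two hex strings from tls.sh@119-120 should reproduce the `key=` and `key_2=` values logged above, which are then written to the `mktemp` files, `chmod 0600`-ed, and registered via `keyring_file_add_key`.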
00:16:21.849 [2024-11-26 19:22:55.622200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3723874 ] 00:16:21.849 [2024-11-26 19:22:55.686729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.109 [2024-11-26 19:22:55.715535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.109 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.109 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:22.109 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jFBz4v5hhh 00:16:22.109 19:22:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:22.368 [2024-11-26 19:22:56.077570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.368 [2024-11-26 19:22:56.082155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:22.368 [2024-11-26 19:22:56.082762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1a990 (107): Transport endpoint is not connected 00:16:22.368 [2024-11-26 19:22:56.083756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1a990 (9): Bad file descriptor 00:16:22.368 [2024-11-26 
19:22:56.084758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:22.368 [2024-11-26 19:22:56.084767] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:22.369 [2024-11-26 19:22:56.084773] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:22.369 [2024-11-26 19:22:56.084779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:16:22.369 request: 00:16:22.369 { 00:16:22.369 "name": "TLSTEST", 00:16:22.369 "trtype": "tcp", 00:16:22.369 "traddr": "10.0.0.2", 00:16:22.369 "adrfam": "ipv4", 00:16:22.369 "trsvcid": "4420", 00:16:22.369 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.369 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.369 "prchk_reftag": false, 00:16:22.369 "prchk_guard": false, 00:16:22.369 "hdgst": false, 00:16:22.369 "ddgst": false, 00:16:22.369 "psk": "key0", 00:16:22.369 "allow_unrecognized_csi": false, 00:16:22.369 "method": "bdev_nvme_attach_controller", 00:16:22.369 "req_id": 1 00:16:22.369 } 00:16:22.369 Got JSON-RPC error response 00:16:22.369 response: 00:16:22.369 { 00:16:22.369 "code": -5, 00:16:22.369 "message": "Input/output error" 00:16:22.369 } 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3723874 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3723874 ']' 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3723874 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3723874 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3723874' 00:16:22.369 killing process with pid 3723874 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3723874 00:16:22.369 Received shutdown signal, test time was about 10.000000 seconds 00:16:22.369 00:16:22.369 Latency(us) 00:16:22.369 [2024-11-26T18:22:56.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.369 [2024-11-26T18:22:56.234Z] =================================================================================================================== 00:16:22.369 [2024-11-26T18:22:56.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.369 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3723874 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xit6rdGDM2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
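The `NOT run_bdevperf ...` wrapper at target/tls.sh@147, together with the `es=1` bookkeeping above, is autotest_common's inverted-exit-status idiom: the test step passes only when the wrapped command fails. A minimal sketch of that idiom (simplified; the real helper also routes through `valid_exec_arg` and tracks the status in `es`):

```shell
# Minimal sketch of the NOT idiom: invert the wrapped command's exit status so
# an expected failure, like attaching with the wrong PSK above, can be
# asserted like any other test step.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # it failed, which is what the test wants
}

NOT false && echo "expected failure detected"
```

That is why the `bdev_nvme_attach_controller` call with `/tmp/tmp.jFBz4v5hhh` returning the JSON-RPC Input/output error above counts as a pass for test case 147.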
00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xit6rdGDM2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xit6rdGDM2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xit6rdGDM2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3724124 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3724124 /var/tmp/bdevperf.sock 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3724124 ']' 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:22.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:22.628 [2024-11-26 19:22:56.274279] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:22.628 [2024-11-26 19:22:56.274332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724124 ] 00:16:22.628 [2024-11-26 19:22:56.339075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.628 [2024-11-26 19:22:56.366208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:22.628 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xit6rdGDM2 00:16:22.887 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:16:22.887 [2024-11-26 19:22:56.732244] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:22.887 [2024-11-26 19:22:56.742424] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:22.887 [2024-11-26 19:22:56.742444] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:22.887 [2024-11-26 19:22:56.742463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:16:22.887 [2024-11-26 19:22:56.742505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfb990 (107): Transport endpoint is not connected 00:16:22.887 [2024-11-26 19:22:56.743494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfb990 (9): Bad file descriptor 00:16:22.887 [2024-11-26 19:22:56.744497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:16:22.887 [2024-11-26 19:22:56.744505] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:22.887 [2024-11-26 19:22:56.744511] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:16:22.887 [2024-11-26 19:22:56.744517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:16:22.887 request: 00:16:22.887 { 00:16:22.887 "name": "TLSTEST", 00:16:22.887 "trtype": "tcp", 00:16:22.887 "traddr": "10.0.0.2", 00:16:22.887 "adrfam": "ipv4", 00:16:22.887 "trsvcid": "4420", 00:16:22.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.887 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:22.887 "prchk_reftag": false, 00:16:22.887 "prchk_guard": false, 00:16:22.887 "hdgst": false, 00:16:22.887 "ddgst": false, 00:16:22.887 "psk": "key0", 00:16:22.887 "allow_unrecognized_csi": false, 00:16:22.887 "method": "bdev_nvme_attach_controller", 00:16:22.887 "req_id": 1 00:16:22.887 } 00:16:22.887 Got JSON-RPC error response 00:16:22.887 response: 00:16:22.887 { 00:16:22.887 "code": -5, 00:16:22.887 "message": "Input/output error" 00:16:22.887 } 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3724124 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3724124 ']' 00:16:23.147 19:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3724124 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724124 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724124' 00:16:23.147 killing process with pid 3724124 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3724124 00:16:23.147 Received shutdown signal, test time was about 10.000000 seconds 00:16:23.147 00:16:23.147 Latency(us) 00:16:23.147 [2024-11-26T18:22:57.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.147 [2024-11-26T18:22:57.012Z] =================================================================================================================== 00:16:23.147 [2024-11-26T18:22:57.012Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3724124 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.147 19:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xit6rdGDM2 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xit6rdGDM2 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xit6rdGDM2 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.147 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xit6rdGDM2 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3724143 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3724143 /var/tmp/bdevperf.sock 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3724143 ']' 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.148 19:22:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.148 [2024-11-26 19:22:56.935605] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:23.148 [2024-11-26 19:22:56.935659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724143 ] 00:16:23.148 [2024-11-26 19:22:57.000563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.408 [2024-11-26 19:22:57.028704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.408 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.408 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.408 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xit6rdGDM2 00:16:23.408 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:23.667 [2024-11-26 19:22:57.395005] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:23.667 [2024-11-26 19:22:57.404031] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:23.667 [2024-11-26 19:22:57.404050] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:23.667 [2024-11-26 19:22:57.404069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:16:23.667 [2024-11-26 19:22:57.404103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a7990 (107): Transport endpoint is not connected 00:16:23.667 [2024-11-26 19:22:57.405088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a7990 (9): Bad file descriptor 00:16:23.667 [2024-11-26 19:22:57.406091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:16:23.667 [2024-11-26 19:22:57.406102] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:23.667 [2024-11-26 19:22:57.406108] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:16:23.667 [2024-11-26 19:22:57.406114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:16:23.667 request: 00:16:23.667 { 00:16:23.667 "name": "TLSTEST", 00:16:23.667 "trtype": "tcp", 00:16:23.667 "traddr": "10.0.0.2", 00:16:23.667 "adrfam": "ipv4", 00:16:23.667 "trsvcid": "4420", 00:16:23.667 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:23.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:23.667 "prchk_reftag": false, 00:16:23.667 "prchk_guard": false, 00:16:23.667 "hdgst": false, 00:16:23.667 "ddgst": false, 00:16:23.667 "psk": "key0", 00:16:23.667 "allow_unrecognized_csi": false, 00:16:23.667 "method": "bdev_nvme_attach_controller", 00:16:23.667 "req_id": 1 00:16:23.667 } 00:16:23.667 Got JSON-RPC error response 00:16:23.667 response: 00:16:23.667 { 00:16:23.667 "code": -5, 00:16:23.667 "message": "Input/output error" 00:16:23.667 } 00:16:23.667 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3724143 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3724143 ']' 00:16:23.668 19:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3724143 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724143 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724143' 00:16:23.668 killing process with pid 3724143 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3724143 00:16:23.668 Received shutdown signal, test time was about 10.000000 seconds 00:16:23.668 00:16:23.668 Latency(us) 00:16:23.668 [2024-11-26T18:22:57.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:23.668 [2024-11-26T18:22:57.533Z] =================================================================================================================== 00:16:23.668 [2024-11-26T18:22:57.533Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:23.668 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3724143 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:23.927 19:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3724474 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:23.927 19:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3724474 /var/tmp/bdevperf.sock 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3724474 ']' 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:23.927 [2024-11-26 19:22:57.598876] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:23.927 [2024-11-26 19:22:57.598930] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724474 ] 00:16:23.927 [2024-11-26 19:22:57.663463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.927 [2024-11-26 19:22:57.690423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:23.927 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:16:24.186 [2024-11-26 19:22:57.899930] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:16:24.186 [2024-11-26 19:22:57.899955] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:24.186 request: 00:16:24.186 { 00:16:24.186 "name": "key0", 00:16:24.186 "path": "", 00:16:24.186 "method": "keyring_file_add_key", 00:16:24.186 "req_id": 1 00:16:24.186 } 00:16:24.186 Got JSON-RPC error response 00:16:24.186 response: 00:16:24.186 { 00:16:24.186 "code": -1, 00:16:24.186 "message": "Operation not permitted" 00:16:24.186 } 00:16:24.186 19:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:24.446 [2024-11-26 19:22:58.056402] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:16:24.446 [2024-11-26 19:22:58.056423] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:24.446 request: 00:16:24.446 { 00:16:24.446 "name": "TLSTEST", 00:16:24.446 "trtype": "tcp", 00:16:24.446 "traddr": "10.0.0.2", 00:16:24.446 "adrfam": "ipv4", 00:16:24.446 "trsvcid": "4420", 00:16:24.446 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.446 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.446 "prchk_reftag": false, 00:16:24.446 "prchk_guard": false, 00:16:24.446 "hdgst": false, 00:16:24.446 "ddgst": false, 00:16:24.446 "psk": "key0", 00:16:24.446 "allow_unrecognized_csi": false, 00:16:24.446 "method": "bdev_nvme_attach_controller", 00:16:24.446 "req_id": 1 00:16:24.446 } 00:16:24.446 Got JSON-RPC error response 00:16:24.446 response: 00:16:24.446 { 00:16:24.446 "code": -126, 00:16:24.446 "message": "Required key not available" 00:16:24.446 } 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3724474 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3724474 ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3724474 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724474 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724474' 00:16:24.446 killing process with pid 3724474 
00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3724474 00:16:24.446 Received shutdown signal, test time was about 10.000000 seconds 00:16:24.446 00:16:24.446 Latency(us) 00:16:24.446 [2024-11-26T18:22:58.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.446 [2024-11-26T18:22:58.311Z] =================================================================================================================== 00:16:24.446 [2024-11-26T18:22:58.311Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3724474 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3718119 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3718119 ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3718119 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3718119 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3718119' 00:16:24.446 killing process with pid 3718119 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3718119 00:16:24.446 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3718119 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.rKxa3CyWst 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:24.705 19:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.rKxa3CyWst 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3724503 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3724503 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3724503 ']' 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.705 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:24.705 [2024-11-26 19:22:58.449076] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:24.705 [2024-11-26 19:22:58.449134] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:24.706 [2024-11-26 19:22:58.520534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.706 [2024-11-26 19:22:58.547639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:24.706 [2024-11-26 19:22:58.547667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:24.706 [2024-11-26 19:22:58.547673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:24.706 [2024-11-26 19:22:58.547678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:24.706 [2024-11-26 19:22:58.547682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:24.706 [2024-11-26 19:22:58.548166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rKxa3CyWst 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:24.964 [2024-11-26 19:22:58.787250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.964 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:25.223 19:22:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:25.481 [2024-11-26 19:22:59.100017] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:25.481 [2024-11-26 19:22:59.100218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:25.481 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:25.481 malloc0 00:16:25.481 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:25.739 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:25.739 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKxa3CyWst 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rKxa3CyWst 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3724856 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3724856 /var/tmp/bdevperf.sock 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3724856 ']' 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:25.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.998 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:25.998 [2024-11-26 19:22:59.775003] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:25.998 [2024-11-26 19:22:59.775055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3724856 ] 00:16:25.998 [2024-11-26 19:22:59.839158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.257 [2024-11-26 19:22:59.868003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.257 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.257 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:26.257 19:22:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:26.257 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:26.516 [2024-11-26 19:23:00.238042] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:26.516 TLSTESTn1 00:16:26.516 19:23:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:26.776 Running I/O for 10 seconds... 
00:16:28.654 5703.00 IOPS, 22.28 MiB/s [2024-11-26T18:23:03.457Z] 5789.00 IOPS, 22.61 MiB/s [2024-11-26T18:23:04.838Z] 5965.67 IOPS, 23.30 MiB/s [2024-11-26T18:23:05.777Z] 5887.50 IOPS, 23.00 MiB/s [2024-11-26T18:23:06.714Z] 5862.80 IOPS, 22.90 MiB/s [2024-11-26T18:23:07.651Z] 5912.67 IOPS, 23.10 MiB/s [2024-11-26T18:23:08.590Z] 6006.14 IOPS, 23.46 MiB/s [2024-11-26T18:23:09.531Z] 5944.88 IOPS, 23.22 MiB/s [2024-11-26T18:23:10.471Z] 5933.11 IOPS, 23.18 MiB/s [2024-11-26T18:23:10.471Z] 5898.30 IOPS, 23.04 MiB/s 00:16:36.606 Latency(us) 00:16:36.606 [2024-11-26T18:23:10.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.606 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:36.606 Verification LBA range: start 0x0 length 0x2000 00:16:36.606 TLSTESTn1 : 10.03 5893.33 23.02 0.00 0.00 21679.59 5352.11 32112.64 00:16:36.606 [2024-11-26T18:23:10.471Z] =================================================================================================================== 00:16:36.606 [2024-11-26T18:23:10.471Z] Total : 5893.33 23.02 0.00 0.00 21679.59 5352.11 32112.64 00:16:36.606 { 00:16:36.606 "results": [ 00:16:36.606 { 00:16:36.606 "job": "TLSTESTn1", 00:16:36.606 "core_mask": "0x4", 00:16:36.606 "workload": "verify", 00:16:36.606 "status": "finished", 00:16:36.606 "verify_range": { 00:16:36.606 "start": 0, 00:16:36.606 "length": 8192 00:16:36.606 }, 00:16:36.606 "queue_depth": 128, 00:16:36.606 "io_size": 4096, 00:16:36.606 "runtime": 10.029981, 00:16:36.606 "iops": 5893.331203718133, 00:16:36.606 "mibps": 23.020825014523957, 00:16:36.606 "io_failed": 0, 00:16:36.606 "io_timeout": 0, 00:16:36.606 "avg_latency_us": 21679.592353239725, 00:16:36.606 "min_latency_us": 5352.106666666667, 00:16:36.606 "max_latency_us": 32112.64 00:16:36.606 } 00:16:36.606 ], 00:16:36.606 "core_count": 1 00:16:36.606 } 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3724856 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3724856 ']' 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3724856 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:36.606 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724856 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724856' 00:16:36.866 killing process with pid 3724856 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3724856 00:16:36.866 Received shutdown signal, test time was about 10.000000 seconds 00:16:36.866 00:16:36.866 Latency(us) 00:16:36.866 [2024-11-26T18:23:10.731Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.866 [2024-11-26T18:23:10.731Z] =================================================================================================================== 00:16:36.866 [2024-11-26T18:23:10.731Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3724856 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.rKxa3CyWst 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKxa3CyWst 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKxa3CyWst 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.rKxa3CyWst 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.rKxa3CyWst 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3727196 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3727196 /var/tmp/bdevperf.sock 
00:16:36.866 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3727196 ']' 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:36.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.867 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:36.867 [2024-11-26 19:23:10.723659] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:36.867 [2024-11-26 19:23:10.723712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3727196 ] 00:16:37.126 [2024-11-26 19:23:10.788046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.126 [2024-11-26 19:23:10.816282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.126 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.126 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:37.126 19:23:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:37.385 [2024-11-26 19:23:11.026104] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rKxa3CyWst': 0100666 00:16:37.385 [2024-11-26 19:23:11.026130] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:37.385 request: 00:16:37.385 { 00:16:37.385 "name": "key0", 00:16:37.385 "path": "/tmp/tmp.rKxa3CyWst", 00:16:37.385 "method": "keyring_file_add_key", 00:16:37.385 "req_id": 1 00:16:37.385 } 00:16:37.385 Got JSON-RPC error response 00:16:37.385 response: 00:16:37.385 { 00:16:37.385 "code": -1, 00:16:37.385 "message": "Operation not permitted" 00:16:37.385 } 00:16:37.385 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:37.385 [2024-11-26 19:23:11.186577] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:37.386 [2024-11-26 19:23:11.186602] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:16:37.386 request: 00:16:37.386 { 00:16:37.386 "name": "TLSTEST", 00:16:37.386 "trtype": "tcp", 00:16:37.386 "traddr": "10.0.0.2", 00:16:37.386 "adrfam": "ipv4", 00:16:37.386 "trsvcid": "4420", 00:16:37.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:37.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:37.386 "prchk_reftag": false, 00:16:37.386 "prchk_guard": false, 00:16:37.386 "hdgst": false, 00:16:37.386 "ddgst": false, 00:16:37.386 "psk": "key0", 00:16:37.386 "allow_unrecognized_csi": false, 00:16:37.386 "method": "bdev_nvme_attach_controller", 00:16:37.386 "req_id": 1 00:16:37.386 } 00:16:37.386 Got JSON-RPC error response 00:16:37.386 response: 00:16:37.386 { 00:16:37.386 "code": -126, 00:16:37.386 "message": "Required key not available" 00:16:37.386 } 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3727196 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3727196 ']' 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3727196 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.386 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727196 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3727196' 00:16:37.646 killing process with pid 3727196 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3727196 00:16:37.646 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.646 00:16:37.646 Latency(us) 00:16:37.646 [2024-11-26T18:23:11.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.646 [2024-11-26T18:23:11.511Z] =================================================================================================================== 00:16:37.646 [2024-11-26T18:23:11.511Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3727196 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3724503 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3724503 ']' 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3724503 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3724503 00:16:37.646 
19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3724503' 00:16:37.646 killing process with pid 3724503 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3724503 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3724503 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3727535 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3727535 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3727535 ']' 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:16:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:37.646 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.905 [2024-11-26 19:23:11.541790] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:37.905 [2024-11-26 19:23:11.541841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.905 [2024-11-26 19:23:11.612594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.905 [2024-11-26 19:23:11.640215] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.905 [2024-11-26 19:23:11.640244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:37.905 [2024-11-26 19:23:11.640250] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.905 [2024-11-26 19:23:11.640255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.905 [2024-11-26 19:23:11.640259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:37.905 [2024-11-26 19:23:11.640751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:16:37.905 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:37.906 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:37.906 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rKxa3CyWst 00:16:37.906 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:38.164 [2024-11-26 19:23:11.879883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:38.164 19:23:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:38.424 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:38.424 [2024-11-26 19:23:12.192648] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:38.424 [2024-11-26 19:23:12.192849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.424 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:38.684 malloc0 00:16:38.684 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:38.684 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:38.944 [2024-11-26 19:23:12.663599] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rKxa3CyWst': 0100666 00:16:38.944 [2024-11-26 19:23:12.663618] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:16:38.944 request: 00:16:38.944 { 00:16:38.944 "name": "key0", 00:16:38.944 "path": "/tmp/tmp.rKxa3CyWst", 00:16:38.944 "method": "keyring_file_add_key", 00:16:38.944 "req_id": 1 
00:16:38.944 } 00:16:38.944 Got JSON-RPC error response 00:16:38.944 response: 00:16:38.944 { 00:16:38.944 "code": -1, 00:16:38.944 "message": "Operation not permitted" 00:16:38.944 } 00:16:38.944 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:39.203 [2024-11-26 19:23:12.820009] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:16:39.203 [2024-11-26 19:23:12.820035] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:16:39.203 request: 00:16:39.203 { 00:16:39.203 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:39.203 "host": "nqn.2016-06.io.spdk:host1", 00:16:39.203 "psk": "key0", 00:16:39.203 "method": "nvmf_subsystem_add_host", 00:16:39.203 "req_id": 1 00:16:39.203 } 00:16:39.203 Got JSON-RPC error response 00:16:39.203 response: 00:16:39.203 { 00:16:39.203 "code": -32603, 00:16:39.203 "message": "Internal error" 00:16:39.203 } 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3727535 ']' 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:39.203 19:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727535' 00:16:39.203 killing process with pid 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3727535 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.rKxa3CyWst 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3727903 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3727903 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3727903 ']' 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:39.203 
19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:39.203 19:23:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.204 [2024-11-26 19:23:13.034848] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:39.204 [2024-11-26 19:23:13.034899] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.463 [2024-11-26 19:23:13.109149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.463 [2024-11-26 19:23:13.136725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:39.463 [2024-11-26 19:23:13.136756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:39.463 [2024-11-26 19:23:13.136763] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:39.463 [2024-11-26 19:23:13.136770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:39.463 [2024-11-26 19:23:13.136775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:39.463 [2024-11-26 19:23:13.137255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rKxa3CyWst 00:16:39.463 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:39.722 [2024-11-26 19:23:13.380118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:39.722 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:39.722 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:39.983 [2024-11-26 19:23:13.692873] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:39.983 [2024-11-26 19:23:13.693068] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:16:39.983 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:40.242 malloc0 00:16:40.242 19:23:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:40.242 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3728258 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3728258 /var/tmp/bdevperf.sock 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3728258 ']' 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:16:40.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.501 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.502 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:40.502 [2024-11-26 19:23:14.364790] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:40.502 [2024-11-26 19:23:14.364840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728258 ] 00:16:40.760 [2024-11-26 19:23:14.429483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.760 [2024-11-26 19:23:14.458052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.760 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.760 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:40.760 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:41.018 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:41.018 [2024-11-26 19:23:14.824131] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:41.276 TLSTESTn1 00:16:41.276 19:23:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:16:41.537 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:16:41.537 "subsystems": [ 00:16:41.537 { 00:16:41.537 "subsystem": "keyring", 00:16:41.537 "config": [ 00:16:41.537 { 00:16:41.537 "method": "keyring_file_add_key", 00:16:41.537 "params": { 00:16:41.537 "name": "key0", 00:16:41.537 "path": "/tmp/tmp.rKxa3CyWst" 00:16:41.537 } 00:16:41.537 } 00:16:41.537 ] 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "subsystem": "iobuf", 00:16:41.537 "config": [ 00:16:41.537 { 00:16:41.537 "method": "iobuf_set_options", 00:16:41.537 "params": { 00:16:41.537 "small_pool_count": 8192, 00:16:41.537 "large_pool_count": 1024, 00:16:41.537 "small_bufsize": 8192, 00:16:41.537 "large_bufsize": 135168, 00:16:41.537 "enable_numa": false 00:16:41.537 } 00:16:41.537 } 00:16:41.537 ] 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "subsystem": "sock", 00:16:41.537 "config": [ 00:16:41.537 { 00:16:41.537 "method": "sock_set_default_impl", 00:16:41.537 "params": { 00:16:41.537 "impl_name": "posix" 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "sock_impl_set_options", 00:16:41.537 "params": { 00:16:41.537 "impl_name": "ssl", 00:16:41.537 "recv_buf_size": 4096, 00:16:41.537 "send_buf_size": 4096, 00:16:41.537 "enable_recv_pipe": true, 00:16:41.537 "enable_quickack": false, 00:16:41.537 "enable_placement_id": 0, 00:16:41.537 "enable_zerocopy_send_server": true, 00:16:41.537 "enable_zerocopy_send_client": false, 00:16:41.537 "zerocopy_threshold": 0, 00:16:41.537 "tls_version": 0, 00:16:41.537 "enable_ktls": false 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "sock_impl_set_options", 00:16:41.537 "params": { 00:16:41.537 "impl_name": "posix", 00:16:41.537 "recv_buf_size": 2097152, 00:16:41.537 "send_buf_size": 2097152, 00:16:41.537 "enable_recv_pipe": true, 00:16:41.537 "enable_quickack": false, 00:16:41.537 "enable_placement_id": 0, 
00:16:41.537 "enable_zerocopy_send_server": true, 00:16:41.537 "enable_zerocopy_send_client": false, 00:16:41.537 "zerocopy_threshold": 0, 00:16:41.537 "tls_version": 0, 00:16:41.537 "enable_ktls": false 00:16:41.537 } 00:16:41.537 } 00:16:41.537 ] 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "subsystem": "vmd", 00:16:41.537 "config": [] 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "subsystem": "accel", 00:16:41.537 "config": [ 00:16:41.537 { 00:16:41.537 "method": "accel_set_options", 00:16:41.537 "params": { 00:16:41.537 "small_cache_size": 128, 00:16:41.537 "large_cache_size": 16, 00:16:41.537 "task_count": 2048, 00:16:41.537 "sequence_count": 2048, 00:16:41.537 "buf_count": 2048 00:16:41.537 } 00:16:41.537 } 00:16:41.537 ] 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "subsystem": "bdev", 00:16:41.537 "config": [ 00:16:41.537 { 00:16:41.537 "method": "bdev_set_options", 00:16:41.537 "params": { 00:16:41.537 "bdev_io_pool_size": 65535, 00:16:41.537 "bdev_io_cache_size": 256, 00:16:41.537 "bdev_auto_examine": true, 00:16:41.537 "iobuf_small_cache_size": 128, 00:16:41.537 "iobuf_large_cache_size": 16 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "bdev_raid_set_options", 00:16:41.537 "params": { 00:16:41.537 "process_window_size_kb": 1024, 00:16:41.537 "process_max_bandwidth_mb_sec": 0 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "bdev_iscsi_set_options", 00:16:41.537 "params": { 00:16:41.537 "timeout_sec": 30 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "bdev_nvme_set_options", 00:16:41.537 "params": { 00:16:41.537 "action_on_timeout": "none", 00:16:41.537 "timeout_us": 0, 00:16:41.537 "timeout_admin_us": 0, 00:16:41.537 "keep_alive_timeout_ms": 10000, 00:16:41.537 "arbitration_burst": 0, 00:16:41.537 "low_priority_weight": 0, 00:16:41.537 "medium_priority_weight": 0, 00:16:41.537 "high_priority_weight": 0, 00:16:41.537 "nvme_adminq_poll_period_us": 10000, 00:16:41.537 "nvme_ioq_poll_period_us": 0, 
00:16:41.537 "io_queue_requests": 0, 00:16:41.537 "delay_cmd_submit": true, 00:16:41.537 "transport_retry_count": 4, 00:16:41.537 "bdev_retry_count": 3, 00:16:41.537 "transport_ack_timeout": 0, 00:16:41.537 "ctrlr_loss_timeout_sec": 0, 00:16:41.537 "reconnect_delay_sec": 0, 00:16:41.537 "fast_io_fail_timeout_sec": 0, 00:16:41.537 "disable_auto_failback": false, 00:16:41.537 "generate_uuids": false, 00:16:41.537 "transport_tos": 0, 00:16:41.537 "nvme_error_stat": false, 00:16:41.537 "rdma_srq_size": 0, 00:16:41.537 "io_path_stat": false, 00:16:41.537 "allow_accel_sequence": false, 00:16:41.537 "rdma_max_cq_size": 0, 00:16:41.537 "rdma_cm_event_timeout_ms": 0, 00:16:41.537 "dhchap_digests": [ 00:16:41.537 "sha256", 00:16:41.537 "sha384", 00:16:41.537 "sha512" 00:16:41.537 ], 00:16:41.537 "dhchap_dhgroups": [ 00:16:41.537 "null", 00:16:41.537 "ffdhe2048", 00:16:41.537 "ffdhe3072", 00:16:41.537 "ffdhe4096", 00:16:41.537 "ffdhe6144", 00:16:41.537 "ffdhe8192" 00:16:41.537 ] 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "bdev_nvme_set_hotplug", 00:16:41.537 "params": { 00:16:41.537 "period_us": 100000, 00:16:41.537 "enable": false 00:16:41.537 } 00:16:41.537 }, 00:16:41.537 { 00:16:41.537 "method": "bdev_malloc_create", 00:16:41.537 "params": { 00:16:41.537 "name": "malloc0", 00:16:41.538 "num_blocks": 8192, 00:16:41.538 "block_size": 4096, 00:16:41.538 "physical_block_size": 4096, 00:16:41.538 "uuid": "2c8e5d84-1a40-4d3e-b1a5-5d3cd1518496", 00:16:41.538 "optimal_io_boundary": 0, 00:16:41.538 "md_size": 0, 00:16:41.538 "dif_type": 0, 00:16:41.538 "dif_is_head_of_md": false, 00:16:41.538 "dif_pi_format": 0 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "bdev_wait_for_examine" 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "nbd", 00:16:41.538 "config": [] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "scheduler", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": 
"framework_set_scheduler", 00:16:41.538 "params": { 00:16:41.538 "name": "static" 00:16:41.538 } 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "nvmf", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": "nvmf_set_config", 00:16:41.538 "params": { 00:16:41.538 "discovery_filter": "match_any", 00:16:41.538 "admin_cmd_passthru": { 00:16:41.538 "identify_ctrlr": false 00:16:41.538 }, 00:16:41.538 "dhchap_digests": [ 00:16:41.538 "sha256", 00:16:41.538 "sha384", 00:16:41.538 "sha512" 00:16:41.538 ], 00:16:41.538 "dhchap_dhgroups": [ 00:16:41.538 "null", 00:16:41.538 "ffdhe2048", 00:16:41.538 "ffdhe3072", 00:16:41.538 "ffdhe4096", 00:16:41.538 "ffdhe6144", 00:16:41.538 "ffdhe8192" 00:16:41.538 ] 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_set_max_subsystems", 00:16:41.538 "params": { 00:16:41.538 "max_subsystems": 1024 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_set_crdt", 00:16:41.538 "params": { 00:16:41.538 "crdt1": 0, 00:16:41.538 "crdt2": 0, 00:16:41.538 "crdt3": 0 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_create_transport", 00:16:41.538 "params": { 00:16:41.538 "trtype": "TCP", 00:16:41.538 "max_queue_depth": 128, 00:16:41.538 "max_io_qpairs_per_ctrlr": 127, 00:16:41.538 "in_capsule_data_size": 4096, 00:16:41.538 "max_io_size": 131072, 00:16:41.538 "io_unit_size": 131072, 00:16:41.538 "max_aq_depth": 128, 00:16:41.538 "num_shared_buffers": 511, 00:16:41.538 "buf_cache_size": 4294967295, 00:16:41.538 "dif_insert_or_strip": false, 00:16:41.538 "zcopy": false, 00:16:41.538 "c2h_success": false, 00:16:41.538 "sock_priority": 0, 00:16:41.538 "abort_timeout_sec": 1, 00:16:41.538 "ack_timeout": 0, 00:16:41.538 "data_wr_pool_size": 0 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_create_subsystem", 00:16:41.538 "params": { 00:16:41.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.538 
"allow_any_host": false, 00:16:41.538 "serial_number": "SPDK00000000000001", 00:16:41.538 "model_number": "SPDK bdev Controller", 00:16:41.538 "max_namespaces": 10, 00:16:41.538 "min_cntlid": 1, 00:16:41.538 "max_cntlid": 65519, 00:16:41.538 "ana_reporting": false 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_subsystem_add_host", 00:16:41.538 "params": { 00:16:41.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.538 "host": "nqn.2016-06.io.spdk:host1", 00:16:41.538 "psk": "key0" 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_subsystem_add_ns", 00:16:41.538 "params": { 00:16:41.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.538 "namespace": { 00:16:41.538 "nsid": 1, 00:16:41.538 "bdev_name": "malloc0", 00:16:41.538 "nguid": "2C8E5D841A404D3EB1A55D3CD1518496", 00:16:41.538 "uuid": "2c8e5d84-1a40-4d3e-b1a5-5d3cd1518496", 00:16:41.538 "no_auto_visible": false 00:16:41.538 } 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "nvmf_subsystem_add_listener", 00:16:41.538 "params": { 00:16:41.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.538 "listen_address": { 00:16:41.538 "trtype": "TCP", 00:16:41.538 "adrfam": "IPv4", 00:16:41.538 "traddr": "10.0.0.2", 00:16:41.538 "trsvcid": "4420" 00:16:41.538 }, 00:16:41.538 "secure_channel": true 00:16:41.538 } 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }' 00:16:41.538 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:41.538 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:16:41.538 "subsystems": [ 00:16:41.538 { 00:16:41.538 "subsystem": "keyring", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": "keyring_file_add_key", 00:16:41.538 "params": { 00:16:41.538 "name": "key0", 00:16:41.538 "path": "/tmp/tmp.rKxa3CyWst" 00:16:41.538 } 
00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "iobuf", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": "iobuf_set_options", 00:16:41.538 "params": { 00:16:41.538 "small_pool_count": 8192, 00:16:41.538 "large_pool_count": 1024, 00:16:41.538 "small_bufsize": 8192, 00:16:41.538 "large_bufsize": 135168, 00:16:41.538 "enable_numa": false 00:16:41.538 } 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "sock", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": "sock_set_default_impl", 00:16:41.538 "params": { 00:16:41.538 "impl_name": "posix" 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "sock_impl_set_options", 00:16:41.538 "params": { 00:16:41.538 "impl_name": "ssl", 00:16:41.538 "recv_buf_size": 4096, 00:16:41.538 "send_buf_size": 4096, 00:16:41.538 "enable_recv_pipe": true, 00:16:41.538 "enable_quickack": false, 00:16:41.538 "enable_placement_id": 0, 00:16:41.538 "enable_zerocopy_send_server": true, 00:16:41.538 "enable_zerocopy_send_client": false, 00:16:41.538 "zerocopy_threshold": 0, 00:16:41.538 "tls_version": 0, 00:16:41.538 "enable_ktls": false 00:16:41.538 } 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "method": "sock_impl_set_options", 00:16:41.538 "params": { 00:16:41.538 "impl_name": "posix", 00:16:41.538 "recv_buf_size": 2097152, 00:16:41.538 "send_buf_size": 2097152, 00:16:41.538 "enable_recv_pipe": true, 00:16:41.538 "enable_quickack": false, 00:16:41.538 "enable_placement_id": 0, 00:16:41.538 "enable_zerocopy_send_server": true, 00:16:41.538 "enable_zerocopy_send_client": false, 00:16:41.538 "zerocopy_threshold": 0, 00:16:41.538 "tls_version": 0, 00:16:41.538 "enable_ktls": false 00:16:41.538 } 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "vmd", 00:16:41.538 "config": [] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "accel", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 
"method": "accel_set_options", 00:16:41.538 "params": { 00:16:41.538 "small_cache_size": 128, 00:16:41.538 "large_cache_size": 16, 00:16:41.538 "task_count": 2048, 00:16:41.538 "sequence_count": 2048, 00:16:41.538 "buf_count": 2048 00:16:41.538 } 00:16:41.538 } 00:16:41.538 ] 00:16:41.538 }, 00:16:41.538 { 00:16:41.538 "subsystem": "bdev", 00:16:41.538 "config": [ 00:16:41.538 { 00:16:41.538 "method": "bdev_set_options", 00:16:41.538 "params": { 00:16:41.538 "bdev_io_pool_size": 65535, 00:16:41.538 "bdev_io_cache_size": 256, 00:16:41.538 "bdev_auto_examine": true, 00:16:41.538 "iobuf_small_cache_size": 128, 00:16:41.538 "iobuf_large_cache_size": 16 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_raid_set_options", 00:16:41.539 "params": { 00:16:41.539 "process_window_size_kb": 1024, 00:16:41.539 "process_max_bandwidth_mb_sec": 0 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_iscsi_set_options", 00:16:41.539 "params": { 00:16:41.539 "timeout_sec": 30 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_nvme_set_options", 00:16:41.539 "params": { 00:16:41.539 "action_on_timeout": "none", 00:16:41.539 "timeout_us": 0, 00:16:41.539 "timeout_admin_us": 0, 00:16:41.539 "keep_alive_timeout_ms": 10000, 00:16:41.539 "arbitration_burst": 0, 00:16:41.539 "low_priority_weight": 0, 00:16:41.539 "medium_priority_weight": 0, 00:16:41.539 "high_priority_weight": 0, 00:16:41.539 "nvme_adminq_poll_period_us": 10000, 00:16:41.539 "nvme_ioq_poll_period_us": 0, 00:16:41.539 "io_queue_requests": 512, 00:16:41.539 "delay_cmd_submit": true, 00:16:41.539 "transport_retry_count": 4, 00:16:41.539 "bdev_retry_count": 3, 00:16:41.539 "transport_ack_timeout": 0, 00:16:41.539 "ctrlr_loss_timeout_sec": 0, 00:16:41.539 "reconnect_delay_sec": 0, 00:16:41.539 "fast_io_fail_timeout_sec": 0, 00:16:41.539 "disable_auto_failback": false, 00:16:41.539 "generate_uuids": false, 00:16:41.539 "transport_tos": 0, 00:16:41.539 
"nvme_error_stat": false, 00:16:41.539 "rdma_srq_size": 0, 00:16:41.539 "io_path_stat": false, 00:16:41.539 "allow_accel_sequence": false, 00:16:41.539 "rdma_max_cq_size": 0, 00:16:41.539 "rdma_cm_event_timeout_ms": 0, 00:16:41.539 "dhchap_digests": [ 00:16:41.539 "sha256", 00:16:41.539 "sha384", 00:16:41.539 "sha512" 00:16:41.539 ], 00:16:41.539 "dhchap_dhgroups": [ 00:16:41.539 "null", 00:16:41.539 "ffdhe2048", 00:16:41.539 "ffdhe3072", 00:16:41.539 "ffdhe4096", 00:16:41.539 "ffdhe6144", 00:16:41.539 "ffdhe8192" 00:16:41.539 ] 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_nvme_attach_controller", 00:16:41.539 "params": { 00:16:41.539 "name": "TLSTEST", 00:16:41.539 "trtype": "TCP", 00:16:41.539 "adrfam": "IPv4", 00:16:41.539 "traddr": "10.0.0.2", 00:16:41.539 "trsvcid": "4420", 00:16:41.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.539 "prchk_reftag": false, 00:16:41.539 "prchk_guard": false, 00:16:41.539 "ctrlr_loss_timeout_sec": 0, 00:16:41.539 "reconnect_delay_sec": 0, 00:16:41.539 "fast_io_fail_timeout_sec": 0, 00:16:41.539 "psk": "key0", 00:16:41.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.539 "hdgst": false, 00:16:41.539 "ddgst": false, 00:16:41.539 "multipath": "multipath" 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_nvme_set_hotplug", 00:16:41.539 "params": { 00:16:41.539 "period_us": 100000, 00:16:41.539 "enable": false 00:16:41.539 } 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "method": "bdev_wait_for_examine" 00:16:41.539 } 00:16:41.539 ] 00:16:41.539 }, 00:16:41.539 { 00:16:41.539 "subsystem": "nbd", 00:16:41.539 "config": [] 00:16:41.539 } 00:16:41.539 ] 00:16:41.539 }' 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3728258 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3728258 ']' 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3728258 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.539 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728258 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728258' 00:16:41.799 killing process with pid 3728258 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3728258 00:16:41.799 Received shutdown signal, test time was about 10.000000 seconds 00:16:41.799 00:16:41.799 Latency(us) 00:16:41.799 [2024-11-26T18:23:15.664Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.799 [2024-11-26T18:23:15.664Z] =================================================================================================================== 00:16:41.799 [2024-11-26T18:23:15.664Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3728258 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3727903 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3727903 ']' 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3727903 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3727903 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3727903' 00:16:41.799 killing process with pid 3727903 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3727903 00:16:41.799 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3727903 00:16:42.060 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:16:42.060 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:42.060 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:42.060 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.060 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:16:42.060 "subsystems": [ 00:16:42.060 { 00:16:42.060 "subsystem": "keyring", 00:16:42.060 "config": [ 00:16:42.060 { 00:16:42.060 "method": "keyring_file_add_key", 00:16:42.060 "params": { 00:16:42.060 "name": "key0", 00:16:42.060 "path": "/tmp/tmp.rKxa3CyWst" 00:16:42.060 } 00:16:42.060 } 00:16:42.060 ] 00:16:42.060 }, 00:16:42.060 { 00:16:42.060 "subsystem": "iobuf", 00:16:42.060 "config": [ 00:16:42.060 { 00:16:42.060 "method": "iobuf_set_options", 00:16:42.060 "params": { 00:16:42.060 "small_pool_count": 8192, 00:16:42.060 "large_pool_count": 1024, 00:16:42.060 "small_bufsize": 8192, 00:16:42.060 "large_bufsize": 135168, 
00:16:42.060 "enable_numa": false 00:16:42.060 } 00:16:42.060 } 00:16:42.060 ] 00:16:42.060 }, 00:16:42.060 { 00:16:42.060 "subsystem": "sock", 00:16:42.060 "config": [ 00:16:42.060 { 00:16:42.060 "method": "sock_set_default_impl", 00:16:42.060 "params": { 00:16:42.060 "impl_name": "posix" 00:16:42.060 } 00:16:42.060 }, 00:16:42.060 { 00:16:42.060 "method": "sock_impl_set_options", 00:16:42.060 "params": { 00:16:42.060 "impl_name": "ssl", 00:16:42.060 "recv_buf_size": 4096, 00:16:42.060 "send_buf_size": 4096, 00:16:42.060 "enable_recv_pipe": true, 00:16:42.060 "enable_quickack": false, 00:16:42.060 "enable_placement_id": 0, 00:16:42.060 "enable_zerocopy_send_server": true, 00:16:42.060 "enable_zerocopy_send_client": false, 00:16:42.060 "zerocopy_threshold": 0, 00:16:42.060 "tls_version": 0, 00:16:42.060 "enable_ktls": false 00:16:42.060 } 00:16:42.060 }, 00:16:42.060 { 00:16:42.060 "method": "sock_impl_set_options", 00:16:42.060 "params": { 00:16:42.060 "impl_name": "posix", 00:16:42.060 "recv_buf_size": 2097152, 00:16:42.060 "send_buf_size": 2097152, 00:16:42.060 "enable_recv_pipe": true, 00:16:42.060 "enable_quickack": false, 00:16:42.060 "enable_placement_id": 0, 00:16:42.060 "enable_zerocopy_send_server": true, 00:16:42.060 "enable_zerocopy_send_client": false, 00:16:42.060 "zerocopy_threshold": 0, 00:16:42.060 "tls_version": 0, 00:16:42.060 "enable_ktls": false 00:16:42.060 } 00:16:42.060 } 00:16:42.060 ] 00:16:42.060 }, 00:16:42.060 { 00:16:42.060 "subsystem": "vmd", 00:16:42.060 "config": [] 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "subsystem": "accel", 00:16:42.061 "config": [ 00:16:42.061 { 00:16:42.061 "method": "accel_set_options", 00:16:42.061 "params": { 00:16:42.061 "small_cache_size": 128, 00:16:42.061 "large_cache_size": 16, 00:16:42.061 "task_count": 2048, 00:16:42.061 "sequence_count": 2048, 00:16:42.061 "buf_count": 2048 00:16:42.061 } 00:16:42.061 } 00:16:42.061 ] 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "subsystem": "bdev", 00:16:42.061 
"config": [ 00:16:42.061 { 00:16:42.061 "method": "bdev_set_options", 00:16:42.061 "params": { 00:16:42.061 "bdev_io_pool_size": 65535, 00:16:42.061 "bdev_io_cache_size": 256, 00:16:42.061 "bdev_auto_examine": true, 00:16:42.061 "iobuf_small_cache_size": 128, 00:16:42.061 "iobuf_large_cache_size": 16 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_raid_set_options", 00:16:42.061 "params": { 00:16:42.061 "process_window_size_kb": 1024, 00:16:42.061 "process_max_bandwidth_mb_sec": 0 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_iscsi_set_options", 00:16:42.061 "params": { 00:16:42.061 "timeout_sec": 30 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_nvme_set_options", 00:16:42.061 "params": { 00:16:42.061 "action_on_timeout": "none", 00:16:42.061 "timeout_us": 0, 00:16:42.061 "timeout_admin_us": 0, 00:16:42.061 "keep_alive_timeout_ms": 10000, 00:16:42.061 "arbitration_burst": 0, 00:16:42.061 "low_priority_weight": 0, 00:16:42.061 "medium_priority_weight": 0, 00:16:42.061 "high_priority_weight": 0, 00:16:42.061 "nvme_adminq_poll_period_us": 10000, 00:16:42.061 "nvme_ioq_poll_period_us": 0, 00:16:42.061 "io_queue_requests": 0, 00:16:42.061 "delay_cmd_submit": true, 00:16:42.061 "transport_retry_count": 4, 00:16:42.061 "bdev_retry_count": 3, 00:16:42.061 "transport_ack_timeout": 0, 00:16:42.061 "ctrlr_loss_timeout_sec": 0, 00:16:42.061 "reconnect_delay_sec": 0, 00:16:42.061 "fast_io_fail_timeout_sec": 0, 00:16:42.061 "disable_auto_failback": false, 00:16:42.061 "generate_uuids": false, 00:16:42.061 "transport_tos": 0, 00:16:42.061 "nvme_error_stat": false, 00:16:42.061 "rdma_srq_size": 0, 00:16:42.061 "io_path_stat": false, 00:16:42.061 "allow_accel_sequence": false, 00:16:42.061 "rdma_max_cq_size": 0, 00:16:42.061 "rdma_cm_event_timeout_ms": 0, 00:16:42.061 "dhchap_digests": [ 00:16:42.061 "sha256", 00:16:42.061 "sha384", 00:16:42.061 "sha512" 00:16:42.061 ], 00:16:42.061 
"dhchap_dhgroups": [ 00:16:42.061 "null", 00:16:42.061 "ffdhe2048", 00:16:42.061 "ffdhe3072", 00:16:42.061 "ffdhe4096", 00:16:42.061 "ffdhe6144", 00:16:42.061 "ffdhe8192" 00:16:42.061 ] 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_nvme_set_hotplug", 00:16:42.061 "params": { 00:16:42.061 "period_us": 100000, 00:16:42.061 "enable": false 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_malloc_create", 00:16:42.061 "params": { 00:16:42.061 "name": "malloc0", 00:16:42.061 "num_blocks": 8192, 00:16:42.061 "block_size": 4096, 00:16:42.061 "physical_block_size": 4096, 00:16:42.061 "uuid": "2c8e5d84-1a40-4d3e-b1a5-5d3cd1518496", 00:16:42.061 "optimal_io_boundary": 0, 00:16:42.061 "md_size": 0, 00:16:42.061 "dif_type": 0, 00:16:42.061 "dif_is_head_of_md": false, 00:16:42.061 "dif_pi_format": 0 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "bdev_wait_for_examine" 00:16:42.061 } 00:16:42.061 ] 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "subsystem": "nbd", 00:16:42.061 "config": [] 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "subsystem": "scheduler", 00:16:42.061 "config": [ 00:16:42.061 { 00:16:42.061 "method": "framework_set_scheduler", 00:16:42.061 "params": { 00:16:42.061 "name": "static" 00:16:42.061 } 00:16:42.061 } 00:16:42.061 ] 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "subsystem": "nvmf", 00:16:42.061 "config": [ 00:16:42.061 { 00:16:42.061 "method": "nvmf_set_config", 00:16:42.061 "params": { 00:16:42.061 "discovery_filter": "match_any", 00:16:42.061 "admin_cmd_passthru": { 00:16:42.061 "identify_ctrlr": false 00:16:42.061 }, 00:16:42.061 "dhchap_digests": [ 00:16:42.061 "sha256", 00:16:42.061 "sha384", 00:16:42.061 "sha512" 00:16:42.061 ], 00:16:42.061 "dhchap_dhgroups": [ 00:16:42.061 "null", 00:16:42.061 "ffdhe2048", 00:16:42.061 "ffdhe3072", 00:16:42.061 "ffdhe4096", 00:16:42.061 "ffdhe6144", 00:16:42.061 "ffdhe8192" 00:16:42.061 ] 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 
00:16:42.061 "method": "nvmf_set_max_subsystems", 00:16:42.061 "params": { 00:16:42.061 "max_subsystems": 1024 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_set_crdt", 00:16:42.061 "params": { 00:16:42.061 "crdt1": 0, 00:16:42.061 "crdt2": 0, 00:16:42.061 "crdt3": 0 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_create_transport", 00:16:42.061 "params": { 00:16:42.061 "trtype": "TCP", 00:16:42.061 "max_queue_depth": 128, 00:16:42.061 "max_io_qpairs_per_ctrlr": 127, 00:16:42.061 "in_capsule_data_size": 4096, 00:16:42.061 "max_io_size": 131072, 00:16:42.061 "io_unit_size": 131072, 00:16:42.061 "max_aq_depth": 128, 00:16:42.061 "num_shared_buffers": 511, 00:16:42.061 "buf_cache_size": 4294967295, 00:16:42.061 "dif_insert_or_strip": false, 00:16:42.061 "zcopy": false, 00:16:42.061 "c2h_success": false, 00:16:42.061 "sock_priority": 0, 00:16:42.061 "abort_timeout_sec": 1, 00:16:42.061 "ack_timeout": 0, 00:16:42.061 "data_wr_pool_size": 0 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_create_subsystem", 00:16:42.061 "params": { 00:16:42.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.061 "allow_any_host": false, 00:16:42.061 "serial_number": "SPDK00000000000001", 00:16:42.061 "model_number": "SPDK bdev Controller", 00:16:42.061 "max_namespaces": 10, 00:16:42.061 "min_cntlid": 1, 00:16:42.061 "max_cntlid": 65519, 00:16:42.061 "ana_reporting": false 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_subsystem_add_host", 00:16:42.061 "params": { 00:16:42.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.061 "host": "nqn.2016-06.io.spdk:host1", 00:16:42.061 "psk": "key0" 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_subsystem_add_ns", 00:16:42.061 "params": { 00:16:42.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.061 "namespace": { 00:16:42.061 "nsid": 1, 00:16:42.061 "bdev_name": "malloc0", 00:16:42.061 "nguid": 
"2C8E5D841A404D3EB1A55D3CD1518496", 00:16:42.061 "uuid": "2c8e5d84-1a40-4d3e-b1a5-5d3cd1518496", 00:16:42.061 "no_auto_visible": false 00:16:42.061 } 00:16:42.061 } 00:16:42.061 }, 00:16:42.061 { 00:16:42.061 "method": "nvmf_subsystem_add_listener", 00:16:42.061 "params": { 00:16:42.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.061 "listen_address": { 00:16:42.061 "trtype": "TCP", 00:16:42.061 "adrfam": "IPv4", 00:16:42.061 "traddr": "10.0.0.2", 00:16:42.061 "trsvcid": "4420" 00:16:42.061 }, 00:16:42.061 "secure_channel": true 00:16:42.061 } 00:16:42.061 } 00:16:42.061 ] 00:16:42.061 } 00:16:42.061 ] 00:16:42.061 }' 00:16:42.061 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3728548 00:16:42.061 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3728548 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3728548 ']' 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:16:42.062 19:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.062 [2024-11-26 19:23:15.711636] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:42.062 [2024-11-26 19:23:15.711692] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.062 [2024-11-26 19:23:15.782853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.062 [2024-11-26 19:23:15.812496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.062 [2024-11-26 19:23:15.812526] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:42.062 [2024-11-26 19:23:15.812532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.062 [2024-11-26 19:23:15.812537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.062 [2024-11-26 19:23:15.812541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:42.062 [2024-11-26 19:23:15.813023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.322 [2024-11-26 19:23:16.007293] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.322 [2024-11-26 19:23:16.039329] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:42.322 [2024-11-26 19:23:16.039539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3728644 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3728644 /var/tmp/bdevperf.sock 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3728644 ']' 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:42.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:16:42.891 19:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:16:42.891 "subsystems": [ 00:16:42.891 { 00:16:42.891 "subsystem": "keyring", 00:16:42.891 "config": [ 00:16:42.891 { 00:16:42.891 "method": "keyring_file_add_key", 00:16:42.891 "params": { 00:16:42.891 "name": "key0", 00:16:42.891 "path": "/tmp/tmp.rKxa3CyWst" 00:16:42.891 } 00:16:42.891 } 00:16:42.891 ] 00:16:42.891 }, 00:16:42.891 { 00:16:42.891 "subsystem": "iobuf", 00:16:42.891 "config": [ 00:16:42.891 { 00:16:42.891 "method": "iobuf_set_options", 00:16:42.891 "params": { 00:16:42.891 "small_pool_count": 8192, 00:16:42.891 "large_pool_count": 1024, 00:16:42.891 "small_bufsize": 8192, 00:16:42.891 "large_bufsize": 135168, 00:16:42.891 "enable_numa": false 00:16:42.891 } 00:16:42.891 } 00:16:42.891 ] 00:16:42.891 }, 00:16:42.891 { 00:16:42.891 "subsystem": "sock", 00:16:42.891 "config": [ 00:16:42.891 { 00:16:42.891 "method": "sock_set_default_impl", 00:16:42.891 "params": { 00:16:42.891 "impl_name": "posix" 00:16:42.891 } 00:16:42.891 }, 00:16:42.891 { 00:16:42.891 "method": "sock_impl_set_options", 00:16:42.891 "params": { 00:16:42.891 "impl_name": "ssl", 00:16:42.891 "recv_buf_size": 4096, 00:16:42.891 "send_buf_size": 4096, 00:16:42.891 "enable_recv_pipe": true, 00:16:42.891 "enable_quickack": false, 00:16:42.891 "enable_placement_id": 0, 00:16:42.891 "enable_zerocopy_send_server": true, 00:16:42.891 
"enable_zerocopy_send_client": false, 00:16:42.891 "zerocopy_threshold": 0, 00:16:42.891 "tls_version": 0, 00:16:42.891 "enable_ktls": false 00:16:42.891 } 00:16:42.891 }, 00:16:42.891 { 00:16:42.892 "method": "sock_impl_set_options", 00:16:42.892 "params": { 00:16:42.892 "impl_name": "posix", 00:16:42.892 "recv_buf_size": 2097152, 00:16:42.892 "send_buf_size": 2097152, 00:16:42.892 "enable_recv_pipe": true, 00:16:42.892 "enable_quickack": false, 00:16:42.892 "enable_placement_id": 0, 00:16:42.892 "enable_zerocopy_send_server": true, 00:16:42.892 "enable_zerocopy_send_client": false, 00:16:42.892 "zerocopy_threshold": 0, 00:16:42.892 "tls_version": 0, 00:16:42.892 "enable_ktls": false 00:16:42.892 } 00:16:42.892 } 00:16:42.892 ] 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "subsystem": "vmd", 00:16:42.892 "config": [] 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "subsystem": "accel", 00:16:42.892 "config": [ 00:16:42.892 { 00:16:42.892 "method": "accel_set_options", 00:16:42.892 "params": { 00:16:42.892 "small_cache_size": 128, 00:16:42.892 "large_cache_size": 16, 00:16:42.892 "task_count": 2048, 00:16:42.892 "sequence_count": 2048, 00:16:42.892 "buf_count": 2048 00:16:42.892 } 00:16:42.892 } 00:16:42.892 ] 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "subsystem": "bdev", 00:16:42.892 "config": [ 00:16:42.892 { 00:16:42.892 "method": "bdev_set_options", 00:16:42.892 "params": { 00:16:42.892 "bdev_io_pool_size": 65535, 00:16:42.892 "bdev_io_cache_size": 256, 00:16:42.892 "bdev_auto_examine": true, 00:16:42.892 "iobuf_small_cache_size": 128, 00:16:42.892 "iobuf_large_cache_size": 16 00:16:42.892 } 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "method": "bdev_raid_set_options", 00:16:42.892 "params": { 00:16:42.892 "process_window_size_kb": 1024, 00:16:42.892 "process_max_bandwidth_mb_sec": 0 00:16:42.892 } 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "method": "bdev_iscsi_set_options", 00:16:42.892 "params": { 00:16:42.892 "timeout_sec": 30 00:16:42.892 } 00:16:42.892 }, 
00:16:42.892 { 00:16:42.892 "method": "bdev_nvme_set_options", 00:16:42.892 "params": { 00:16:42.892 "action_on_timeout": "none", 00:16:42.892 "timeout_us": 0, 00:16:42.892 "timeout_admin_us": 0, 00:16:42.892 "keep_alive_timeout_ms": 10000, 00:16:42.892 "arbitration_burst": 0, 00:16:42.892 "low_priority_weight": 0, 00:16:42.892 "medium_priority_weight": 0, 00:16:42.892 "high_priority_weight": 0, 00:16:42.892 "nvme_adminq_poll_period_us": 10000, 00:16:42.892 "nvme_ioq_poll_period_us": 0, 00:16:42.892 "io_queue_requests": 512, 00:16:42.892 "delay_cmd_submit": true, 00:16:42.892 "transport_retry_count": 4, 00:16:42.892 "bdev_retry_count": 3, 00:16:42.892 "transport_ack_timeout": 0, 00:16:42.892 "ctrlr_loss_timeout_sec": 0, 00:16:42.892 "reconnect_delay_sec": 0, 00:16:42.892 "fast_io_fail_timeout_sec": 0, 00:16:42.892 "disable_auto_failback": false, 00:16:42.892 "generate_uuids": false, 00:16:42.892 "transport_tos": 0, 00:16:42.892 "nvme_error_stat": false, 00:16:42.892 "rdma_srq_size": 0, 00:16:42.892 "io_path_stat": false, 00:16:42.892 "allow_accel_sequence": false, 00:16:42.892 "rdma_max_cq_size": 0, 00:16:42.892 "rdma_cm_event_timeout_ms": 0, 00:16:42.892 "dhchap_digests": [ 00:16:42.892 "sha256", 00:16:42.892 "sha384", 00:16:42.892 "sha512" 00:16:42.892 ], 00:16:42.892 "dhchap_dhgroups": [ 00:16:42.892 "null", 00:16:42.892 "ffdhe2048", 00:16:42.892 "ffdhe3072", 00:16:42.892 "ffdhe4096", 00:16:42.892 "ffdhe6144", 00:16:42.892 "ffdhe8192" 00:16:42.892 ] 00:16:42.892 } 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "method": "bdev_nvme_attach_controller", 00:16:42.892 "params": { 00:16:42.892 "name": "TLSTEST", 00:16:42.892 "trtype": "TCP", 00:16:42.892 "adrfam": "IPv4", 00:16:42.892 "traddr": "10.0.0.2", 00:16:42.892 "trsvcid": "4420", 00:16:42.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:42.892 "prchk_reftag": false, 00:16:42.892 "prchk_guard": false, 00:16:42.892 "ctrlr_loss_timeout_sec": 0, 00:16:42.892 "reconnect_delay_sec": 0, 00:16:42.892 
"fast_io_fail_timeout_sec": 0, 00:16:42.892 "psk": "key0", 00:16:42.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:42.892 "hdgst": false, 00:16:42.892 "ddgst": false, 00:16:42.892 "multipath": "multipath" 00:16:42.892 } 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "method": "bdev_nvme_set_hotplug", 00:16:42.892 "params": { 00:16:42.892 "period_us": 100000, 00:16:42.892 "enable": false 00:16:42.892 } 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "method": "bdev_wait_for_examine" 00:16:42.892 } 00:16:42.892 ] 00:16:42.892 }, 00:16:42.892 { 00:16:42.892 "subsystem": "nbd", 00:16:42.892 "config": [] 00:16:42.892 } 00:16:42.892 ] 00:16:42.892 }' 00:16:42.892 [2024-11-26 19:23:16.541936] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:42.892 [2024-11-26 19:23:16.541986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3728644 ] 00:16:42.892 [2024-11-26 19:23:16.606326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.892 [2024-11-26 19:23:16.635160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.151 [2024-11-26 19:23:16.770279] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:43.721 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.721 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:43.721 19:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:43.721 Running I/O for 10 seconds... 
00:16:45.601 4161.00 IOPS, 16.25 MiB/s [2024-11-26T18:23:20.406Z] 4765.00 IOPS, 18.61 MiB/s [2024-11-26T18:23:21.789Z] 4727.67 IOPS, 18.47 MiB/s [2024-11-26T18:23:22.727Z] 4618.25 IOPS, 18.04 MiB/s [2024-11-26T18:23:23.776Z] 4653.40 IOPS, 18.18 MiB/s [2024-11-26T18:23:24.717Z] 4753.33 IOPS, 18.57 MiB/s [2024-11-26T18:23:25.655Z] 4686.43 IOPS, 18.31 MiB/s [2024-11-26T18:23:26.591Z] 4656.00 IOPS, 18.19 MiB/s [2024-11-26T18:23:27.529Z] 4610.33 IOPS, 18.01 MiB/s [2024-11-26T18:23:27.529Z] 4533.00 IOPS, 17.71 MiB/s 00:16:53.664 Latency(us) 00:16:53.664 [2024-11-26T18:23:27.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.664 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:53.664 Verification LBA range: start 0x0 length 0x2000 00:16:53.664 TLSTESTn1 : 10.02 4537.70 17.73 0.00 0.00 28170.09 5461.33 79080.11 00:16:53.664 [2024-11-26T18:23:27.529Z] =================================================================================================================== 00:16:53.664 [2024-11-26T18:23:27.529Z] Total : 4537.70 17.73 0.00 0.00 28170.09 5461.33 79080.11 00:16:53.664 { 00:16:53.664 "results": [ 00:16:53.664 { 00:16:53.664 "job": "TLSTESTn1", 00:16:53.664 "core_mask": "0x4", 00:16:53.664 "workload": "verify", 00:16:53.664 "status": "finished", 00:16:53.664 "verify_range": { 00:16:53.664 "start": 0, 00:16:53.664 "length": 8192 00:16:53.664 }, 00:16:53.664 "queue_depth": 128, 00:16:53.664 "io_size": 4096, 00:16:53.664 "runtime": 10.017845, 00:16:53.664 "iops": 4537.702469942388, 00:16:53.664 "mibps": 17.725400273212454, 00:16:53.664 "io_failed": 0, 00:16:53.664 "io_timeout": 0, 00:16:53.664 "avg_latency_us": 28170.087872468357, 00:16:53.664 "min_latency_us": 5461.333333333333, 00:16:53.664 "max_latency_us": 79080.10666666667 00:16:53.664 } 00:16:53.664 ], 00:16:53.664 "core_count": 1 00:16:53.664 } 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3728644 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3728644 ']' 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3728644 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728644 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728644' 00:16:53.664 killing process with pid 3728644 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3728644 00:16:53.664 Received shutdown signal, test time was about 10.000000 seconds 00:16:53.664 00:16:53.664 Latency(us) 00:16:53.664 [2024-11-26T18:23:27.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.664 [2024-11-26T18:23:27.529Z] =================================================================================================================== 00:16:53.664 [2024-11-26T18:23:27.529Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.664 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3728644 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3728548 ']' 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3728548' 00:16:53.924 killing process with pid 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3728548 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3730995 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3730995 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3730995 ']' 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:53.924 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:54.183 [2024-11-26 19:23:27.791109] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:54.183 [2024-11-26 19:23:27.791163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.183 [2024-11-26 19:23:27.861091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.183 [2024-11-26 19:23:27.889649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.183 [2024-11-26 19:23:27.889676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.183 [2024-11-26 19:23:27.889682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.183 [2024-11-26 19:23:27.889687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:54.183 [2024-11-26 19:23:27.889691] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.183 [2024-11-26 19:23:27.890174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.rKxa3CyWst 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.rKxa3CyWst 00:16:54.183 19:23:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:54.443 [2024-11-26 19:23:28.129740] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.443 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:54.443 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:54.702 [2024-11-26 19:23:28.442508] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:16:54.702 [2024-11-26 19:23:28.442714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.702 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:54.962 malloc0 00:16:54.962 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:54.962 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:55.222 19:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3731347 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3731347 /var/tmp/bdevperf.sock 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3731347 ']' 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.485 
19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:55.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:55.485 [2024-11-26 19:23:29.118370] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:55.485 [2024-11-26 19:23:29.118424] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731347 ] 00:16:55.485 [2024-11-26 19:23:29.182997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.485 [2024-11-26 19:23:29.213165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.485 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.486 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:55.486 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:55.748 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:55.748 [2024-11-26 19:23:29.580499] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:16:56.008 nvme0n1 00:16:56.008 19:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:56.008 Running I/O for 1 seconds... 00:16:56.948 4271.00 IOPS, 16.68 MiB/s 00:16:56.948 Latency(us) 00:16:56.948 [2024-11-26T18:23:30.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.948 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:56.948 Verification LBA range: start 0x0 length 0x2000 00:16:56.948 nvme0n1 : 1.02 4307.73 16.83 0.00 0.00 29465.58 5625.17 103546.88 00:16:56.948 [2024-11-26T18:23:30.813Z] =================================================================================================================== 00:16:56.948 [2024-11-26T18:23:30.813Z] Total : 4307.73 16.83 0.00 0.00 29465.58 5625.17 103546.88 00:16:56.948 { 00:16:56.948 "results": [ 00:16:56.948 { 00:16:56.948 "job": "nvme0n1", 00:16:56.948 "core_mask": "0x2", 00:16:56.948 "workload": "verify", 00:16:56.948 "status": "finished", 00:16:56.948 "verify_range": { 00:16:56.948 "start": 0, 00:16:56.948 "length": 8192 00:16:56.948 }, 00:16:56.948 "queue_depth": 128, 00:16:56.948 "io_size": 4096, 00:16:56.948 "runtime": 1.02142, 00:16:56.948 "iops": 4307.728456462572, 00:16:56.948 "mibps": 16.82706428305692, 00:16:56.948 "io_failed": 0, 00:16:56.948 "io_timeout": 0, 00:16:56.948 "avg_latency_us": 29465.578278787885, 00:16:56.948 "min_latency_us": 5625.173333333333, 00:16:56.948 "max_latency_us": 103546.88 00:16:56.948 } 00:16:56.948 ], 00:16:56.948 "core_count": 1 00:16:56.948 } 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3731347 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3731347 ']' 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3731347 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731347 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731347' 00:16:56.948 killing process with pid 3731347 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3731347 00:16:56.948 Received shutdown signal, test time was about 1.000000 seconds 00:16:56.948 00:16:56.948 Latency(us) 00:16:56.948 [2024-11-26T18:23:30.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.948 [2024-11-26T18:23:30.813Z] =================================================================================================================== 00:16:56.948 [2024-11-26T18:23:30.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.948 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3731347 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3730995 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3730995 ']' 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3730995 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3730995 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3730995' 00:16:57.208 killing process with pid 3730995 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3730995 00:16:57.208 19:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3730995 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3731731 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3731731 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3731731 ']' 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:57.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.208 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:57.468 [2024-11-26 19:23:31.098259] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:57.468 [2024-11-26 19:23:31.098304] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.468 [2024-11-26 19:23:31.158925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.468 [2024-11-26 19:23:31.187705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.468 [2024-11-26 19:23:31.187732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.468 [2024-11-26 19:23:31.187738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.468 [2024-11-26 19:23:31.187743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.468 [2024-11-26 19:23:31.187747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.468 [2024-11-26 19:23:31.188206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.468 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.468 [2024-11-26 19:23:31.291882] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.468 malloc0 00:16:57.468 [2024-11-26 19:23:31.317857] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:57.468 [2024-11-26 19:23:31.318060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3731934 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3731934 /var/tmp/bdevperf.sock 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3731934 ']' 00:16:57.728 19:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:16:57.728 [2024-11-26 19:23:31.379949] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:16:57.728 [2024-11-26 19:23:31.379997] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3731934 ] 00:16:57.728 [2024-11-26 19:23:31.443833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.728 [2024-11-26 19:23:31.473751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:57.728 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.rKxa3CyWst 00:16:57.988 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:57.988 [2024-11-26 19:23:31.837112] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:58.248 nvme0n1 00:16:58.248 19:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.248 Running I/O for 1 seconds... 
00:16:59.189 3727.00 IOPS, 14.56 MiB/s 00:16:59.189 Latency(us) 00:16:59.189 [2024-11-26T18:23:33.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.189 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:59.189 Verification LBA range: start 0x0 length 0x2000 00:16:59.189 nvme0n1 : 1.04 3718.27 14.52 0.00 0.00 33908.87 5133.65 79953.92 00:16:59.189 [2024-11-26T18:23:33.054Z] =================================================================================================================== 00:16:59.189 [2024-11-26T18:23:33.054Z] Total : 3718.27 14.52 0.00 0.00 33908.87 5133.65 79953.92 00:16:59.189 { 00:16:59.189 "results": [ 00:16:59.189 { 00:16:59.189 "job": "nvme0n1", 00:16:59.189 "core_mask": "0x2", 00:16:59.189 "workload": "verify", 00:16:59.189 "status": "finished", 00:16:59.189 "verify_range": { 00:16:59.189 "start": 0, 00:16:59.189 "length": 8192 00:16:59.189 }, 00:16:59.189 "queue_depth": 128, 00:16:59.189 "io_size": 4096, 00:16:59.189 "runtime": 1.036773, 00:16:59.189 "iops": 3718.268126195416, 00:16:59.189 "mibps": 14.524484867950843, 00:16:59.189 "io_failed": 0, 00:16:59.189 "io_timeout": 0, 00:16:59.189 "avg_latency_us": 33908.87324167747, 00:16:59.189 "min_latency_us": 5133.653333333334, 00:16:59.189 "max_latency_us": 79953.92 00:16:59.189 } 00:16:59.189 ], 00:16:59.189 "core_count": 1 00:16:59.189 } 00:16:59.189 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:16:59.189 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.189 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.448 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.448 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:16:59.448 "subsystems": [ 00:16:59.448 { 00:16:59.448 "subsystem": "keyring", 
00:16:59.448 "config": [ 00:16:59.448 { 00:16:59.448 "method": "keyring_file_add_key", 00:16:59.448 "params": { 00:16:59.448 "name": "key0", 00:16:59.448 "path": "/tmp/tmp.rKxa3CyWst" 00:16:59.448 } 00:16:59.448 } 00:16:59.448 ] 00:16:59.448 }, 00:16:59.448 { 00:16:59.448 "subsystem": "iobuf", 00:16:59.448 "config": [ 00:16:59.448 { 00:16:59.448 "method": "iobuf_set_options", 00:16:59.448 "params": { 00:16:59.448 "small_pool_count": 8192, 00:16:59.448 "large_pool_count": 1024, 00:16:59.448 "small_bufsize": 8192, 00:16:59.448 "large_bufsize": 135168, 00:16:59.448 "enable_numa": false 00:16:59.448 } 00:16:59.448 } 00:16:59.448 ] 00:16:59.448 }, 00:16:59.448 { 00:16:59.448 "subsystem": "sock", 00:16:59.448 "config": [ 00:16:59.448 { 00:16:59.448 "method": "sock_set_default_impl", 00:16:59.448 "params": { 00:16:59.448 "impl_name": "posix" 00:16:59.448 } 00:16:59.448 }, 00:16:59.448 { 00:16:59.448 "method": "sock_impl_set_options", 00:16:59.448 "params": { 00:16:59.448 "impl_name": "ssl", 00:16:59.448 "recv_buf_size": 4096, 00:16:59.448 "send_buf_size": 4096, 00:16:59.448 "enable_recv_pipe": true, 00:16:59.448 "enable_quickack": false, 00:16:59.448 "enable_placement_id": 0, 00:16:59.448 "enable_zerocopy_send_server": true, 00:16:59.448 "enable_zerocopy_send_client": false, 00:16:59.448 "zerocopy_threshold": 0, 00:16:59.448 "tls_version": 0, 00:16:59.449 "enable_ktls": false 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "sock_impl_set_options", 00:16:59.449 "params": { 00:16:59.449 "impl_name": "posix", 00:16:59.449 "recv_buf_size": 2097152, 00:16:59.449 "send_buf_size": 2097152, 00:16:59.449 "enable_recv_pipe": true, 00:16:59.449 "enable_quickack": false, 00:16:59.449 "enable_placement_id": 0, 00:16:59.449 "enable_zerocopy_send_server": true, 00:16:59.449 "enable_zerocopy_send_client": false, 00:16:59.449 "zerocopy_threshold": 0, 00:16:59.449 "tls_version": 0, 00:16:59.449 "enable_ktls": false 00:16:59.449 } 00:16:59.449 } 00:16:59.449 ] 
00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "vmd", 00:16:59.449 "config": [] 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "accel", 00:16:59.449 "config": [ 00:16:59.449 { 00:16:59.449 "method": "accel_set_options", 00:16:59.449 "params": { 00:16:59.449 "small_cache_size": 128, 00:16:59.449 "large_cache_size": 16, 00:16:59.449 "task_count": 2048, 00:16:59.449 "sequence_count": 2048, 00:16:59.449 "buf_count": 2048 00:16:59.449 } 00:16:59.449 } 00:16:59.449 ] 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "bdev", 00:16:59.449 "config": [ 00:16:59.449 { 00:16:59.449 "method": "bdev_set_options", 00:16:59.449 "params": { 00:16:59.449 "bdev_io_pool_size": 65535, 00:16:59.449 "bdev_io_cache_size": 256, 00:16:59.449 "bdev_auto_examine": true, 00:16:59.449 "iobuf_small_cache_size": 128, 00:16:59.449 "iobuf_large_cache_size": 16 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_raid_set_options", 00:16:59.449 "params": { 00:16:59.449 "process_window_size_kb": 1024, 00:16:59.449 "process_max_bandwidth_mb_sec": 0 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_iscsi_set_options", 00:16:59.449 "params": { 00:16:59.449 "timeout_sec": 30 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_nvme_set_options", 00:16:59.449 "params": { 00:16:59.449 "action_on_timeout": "none", 00:16:59.449 "timeout_us": 0, 00:16:59.449 "timeout_admin_us": 0, 00:16:59.449 "keep_alive_timeout_ms": 10000, 00:16:59.449 "arbitration_burst": 0, 00:16:59.449 "low_priority_weight": 0, 00:16:59.449 "medium_priority_weight": 0, 00:16:59.449 "high_priority_weight": 0, 00:16:59.449 "nvme_adminq_poll_period_us": 10000, 00:16:59.449 "nvme_ioq_poll_period_us": 0, 00:16:59.449 "io_queue_requests": 0, 00:16:59.449 "delay_cmd_submit": true, 00:16:59.449 "transport_retry_count": 4, 00:16:59.449 "bdev_retry_count": 3, 00:16:59.449 "transport_ack_timeout": 0, 00:16:59.449 "ctrlr_loss_timeout_sec": 0, 00:16:59.449 
"reconnect_delay_sec": 0, 00:16:59.449 "fast_io_fail_timeout_sec": 0, 00:16:59.449 "disable_auto_failback": false, 00:16:59.449 "generate_uuids": false, 00:16:59.449 "transport_tos": 0, 00:16:59.449 "nvme_error_stat": false, 00:16:59.449 "rdma_srq_size": 0, 00:16:59.449 "io_path_stat": false, 00:16:59.449 "allow_accel_sequence": false, 00:16:59.449 "rdma_max_cq_size": 0, 00:16:59.449 "rdma_cm_event_timeout_ms": 0, 00:16:59.449 "dhchap_digests": [ 00:16:59.449 "sha256", 00:16:59.449 "sha384", 00:16:59.449 "sha512" 00:16:59.449 ], 00:16:59.449 "dhchap_dhgroups": [ 00:16:59.449 "null", 00:16:59.449 "ffdhe2048", 00:16:59.449 "ffdhe3072", 00:16:59.449 "ffdhe4096", 00:16:59.449 "ffdhe6144", 00:16:59.449 "ffdhe8192" 00:16:59.449 ] 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_nvme_set_hotplug", 00:16:59.449 "params": { 00:16:59.449 "period_us": 100000, 00:16:59.449 "enable": false 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_malloc_create", 00:16:59.449 "params": { 00:16:59.449 "name": "malloc0", 00:16:59.449 "num_blocks": 8192, 00:16:59.449 "block_size": 4096, 00:16:59.449 "physical_block_size": 4096, 00:16:59.449 "uuid": "036ce140-505c-44d8-add1-b6bce0605467", 00:16:59.449 "optimal_io_boundary": 0, 00:16:59.449 "md_size": 0, 00:16:59.449 "dif_type": 0, 00:16:59.449 "dif_is_head_of_md": false, 00:16:59.449 "dif_pi_format": 0 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "bdev_wait_for_examine" 00:16:59.449 } 00:16:59.449 ] 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "nbd", 00:16:59.449 "config": [] 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "scheduler", 00:16:59.449 "config": [ 00:16:59.449 { 00:16:59.449 "method": "framework_set_scheduler", 00:16:59.449 "params": { 00:16:59.449 "name": "static" 00:16:59.449 } 00:16:59.449 } 00:16:59.449 ] 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "subsystem": "nvmf", 00:16:59.449 "config": [ 00:16:59.449 { 00:16:59.449 
"method": "nvmf_set_config", 00:16:59.449 "params": { 00:16:59.449 "discovery_filter": "match_any", 00:16:59.449 "admin_cmd_passthru": { 00:16:59.449 "identify_ctrlr": false 00:16:59.449 }, 00:16:59.449 "dhchap_digests": [ 00:16:59.449 "sha256", 00:16:59.449 "sha384", 00:16:59.449 "sha512" 00:16:59.449 ], 00:16:59.449 "dhchap_dhgroups": [ 00:16:59.449 "null", 00:16:59.449 "ffdhe2048", 00:16:59.449 "ffdhe3072", 00:16:59.449 "ffdhe4096", 00:16:59.449 "ffdhe6144", 00:16:59.449 "ffdhe8192" 00:16:59.449 ] 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "nvmf_set_max_subsystems", 00:16:59.449 "params": { 00:16:59.449 "max_subsystems": 1024 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "nvmf_set_crdt", 00:16:59.449 "params": { 00:16:59.449 "crdt1": 0, 00:16:59.449 "crdt2": 0, 00:16:59.449 "crdt3": 0 00:16:59.449 } 00:16:59.449 }, 00:16:59.449 { 00:16:59.449 "method": "nvmf_create_transport", 00:16:59.449 "params": { 00:16:59.449 "trtype": "TCP", 00:16:59.450 "max_queue_depth": 128, 00:16:59.450 "max_io_qpairs_per_ctrlr": 127, 00:16:59.450 "in_capsule_data_size": 4096, 00:16:59.450 "max_io_size": 131072, 00:16:59.450 "io_unit_size": 131072, 00:16:59.450 "max_aq_depth": 128, 00:16:59.450 "num_shared_buffers": 511, 00:16:59.450 "buf_cache_size": 4294967295, 00:16:59.450 "dif_insert_or_strip": false, 00:16:59.450 "zcopy": false, 00:16:59.450 "c2h_success": false, 00:16:59.450 "sock_priority": 0, 00:16:59.450 "abort_timeout_sec": 1, 00:16:59.450 "ack_timeout": 0, 00:16:59.450 "data_wr_pool_size": 0 00:16:59.450 } 00:16:59.450 }, 00:16:59.450 { 00:16:59.450 "method": "nvmf_create_subsystem", 00:16:59.450 "params": { 00:16:59.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.450 "allow_any_host": false, 00:16:59.450 "serial_number": "00000000000000000000", 00:16:59.450 "model_number": "SPDK bdev Controller", 00:16:59.450 "max_namespaces": 32, 00:16:59.450 "min_cntlid": 1, 00:16:59.450 "max_cntlid": 65519, 00:16:59.450 "ana_reporting": 
false 00:16:59.450 } 00:16:59.450 }, 00:16:59.450 { 00:16:59.450 "method": "nvmf_subsystem_add_host", 00:16:59.450 "params": { 00:16:59.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.450 "host": "nqn.2016-06.io.spdk:host1", 00:16:59.450 "psk": "key0" 00:16:59.450 } 00:16:59.450 }, 00:16:59.450 { 00:16:59.450 "method": "nvmf_subsystem_add_ns", 00:16:59.450 "params": { 00:16:59.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.450 "namespace": { 00:16:59.450 "nsid": 1, 00:16:59.450 "bdev_name": "malloc0", 00:16:59.450 "nguid": "036CE140505C44D8ADD1B6BCE0605467", 00:16:59.450 "uuid": "036ce140-505c-44d8-add1-b6bce0605467", 00:16:59.450 "no_auto_visible": false 00:16:59.450 } 00:16:59.450 } 00:16:59.450 }, 00:16:59.450 { 00:16:59.450 "method": "nvmf_subsystem_add_listener", 00:16:59.450 "params": { 00:16:59.450 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.450 "listen_address": { 00:16:59.450 "trtype": "TCP", 00:16:59.450 "adrfam": "IPv4", 00:16:59.450 "traddr": "10.0.0.2", 00:16:59.450 "trsvcid": "4420" 00:16:59.450 }, 00:16:59.450 "secure_channel": false, 00:16:59.450 "sock_impl": "ssl" 00:16:59.450 } 00:16:59.450 } 00:16:59.450 ] 00:16:59.450 } 00:16:59.450 ] 00:16:59.450 }' 00:16:59.450 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:16:59.710 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:16:59.710 "subsystems": [ 00:16:59.710 { 00:16:59.710 "subsystem": "keyring", 00:16:59.710 "config": [ 00:16:59.710 { 00:16:59.710 "method": "keyring_file_add_key", 00:16:59.710 "params": { 00:16:59.710 "name": "key0", 00:16:59.710 "path": "/tmp/tmp.rKxa3CyWst" 00:16:59.710 } 00:16:59.710 } 00:16:59.710 ] 00:16:59.710 }, 00:16:59.710 { 00:16:59.710 "subsystem": "iobuf", 00:16:59.710 "config": [ 00:16:59.710 { 00:16:59.710 "method": "iobuf_set_options", 00:16:59.710 "params": { 00:16:59.710 "small_pool_count": 
8192, 00:16:59.710 "large_pool_count": 1024, 00:16:59.710 "small_bufsize": 8192, 00:16:59.710 "large_bufsize": 135168, 00:16:59.710 "enable_numa": false 00:16:59.710 } 00:16:59.710 } 00:16:59.710 ] 00:16:59.710 }, 00:16:59.710 { 00:16:59.710 "subsystem": "sock", 00:16:59.710 "config": [ 00:16:59.710 { 00:16:59.710 "method": "sock_set_default_impl", 00:16:59.710 "params": { 00:16:59.710 "impl_name": "posix" 00:16:59.710 } 00:16:59.710 }, 00:16:59.710 { 00:16:59.710 "method": "sock_impl_set_options", 00:16:59.710 "params": { 00:16:59.710 "impl_name": "ssl", 00:16:59.710 "recv_buf_size": 4096, 00:16:59.710 "send_buf_size": 4096, 00:16:59.710 "enable_recv_pipe": true, 00:16:59.710 "enable_quickack": false, 00:16:59.710 "enable_placement_id": 0, 00:16:59.710 "enable_zerocopy_send_server": true, 00:16:59.710 "enable_zerocopy_send_client": false, 00:16:59.710 "zerocopy_threshold": 0, 00:16:59.710 "tls_version": 0, 00:16:59.710 "enable_ktls": false 00:16:59.710 } 00:16:59.710 }, 00:16:59.710 { 00:16:59.710 "method": "sock_impl_set_options", 00:16:59.710 "params": { 00:16:59.710 "impl_name": "posix", 00:16:59.710 "recv_buf_size": 2097152, 00:16:59.710 "send_buf_size": 2097152, 00:16:59.710 "enable_recv_pipe": true, 00:16:59.710 "enable_quickack": false, 00:16:59.710 "enable_placement_id": 0, 00:16:59.710 "enable_zerocopy_send_server": true, 00:16:59.710 "enable_zerocopy_send_client": false, 00:16:59.710 "zerocopy_threshold": 0, 00:16:59.710 "tls_version": 0, 00:16:59.710 "enable_ktls": false 00:16:59.710 } 00:16:59.710 } 00:16:59.710 ] 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "subsystem": "vmd", 00:16:59.711 "config": [] 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "subsystem": "accel", 00:16:59.711 "config": [ 00:16:59.711 { 00:16:59.711 "method": "accel_set_options", 00:16:59.711 "params": { 00:16:59.711 "small_cache_size": 128, 00:16:59.711 "large_cache_size": 16, 00:16:59.711 "task_count": 2048, 00:16:59.711 "sequence_count": 2048, 00:16:59.711 "buf_count": 2048 
00:16:59.711 } 00:16:59.711 } 00:16:59.711 ] 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "subsystem": "bdev", 00:16:59.711 "config": [ 00:16:59.711 { 00:16:59.711 "method": "bdev_set_options", 00:16:59.711 "params": { 00:16:59.711 "bdev_io_pool_size": 65535, 00:16:59.711 "bdev_io_cache_size": 256, 00:16:59.711 "bdev_auto_examine": true, 00:16:59.711 "iobuf_small_cache_size": 128, 00:16:59.711 "iobuf_large_cache_size": 16 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_raid_set_options", 00:16:59.711 "params": { 00:16:59.711 "process_window_size_kb": 1024, 00:16:59.711 "process_max_bandwidth_mb_sec": 0 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_iscsi_set_options", 00:16:59.711 "params": { 00:16:59.711 "timeout_sec": 30 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_nvme_set_options", 00:16:59.711 "params": { 00:16:59.711 "action_on_timeout": "none", 00:16:59.711 "timeout_us": 0, 00:16:59.711 "timeout_admin_us": 0, 00:16:59.711 "keep_alive_timeout_ms": 10000, 00:16:59.711 "arbitration_burst": 0, 00:16:59.711 "low_priority_weight": 0, 00:16:59.711 "medium_priority_weight": 0, 00:16:59.711 "high_priority_weight": 0, 00:16:59.711 "nvme_adminq_poll_period_us": 10000, 00:16:59.711 "nvme_ioq_poll_period_us": 0, 00:16:59.711 "io_queue_requests": 512, 00:16:59.711 "delay_cmd_submit": true, 00:16:59.711 "transport_retry_count": 4, 00:16:59.711 "bdev_retry_count": 3, 00:16:59.711 "transport_ack_timeout": 0, 00:16:59.711 "ctrlr_loss_timeout_sec": 0, 00:16:59.711 "reconnect_delay_sec": 0, 00:16:59.711 "fast_io_fail_timeout_sec": 0, 00:16:59.711 "disable_auto_failback": false, 00:16:59.711 "generate_uuids": false, 00:16:59.711 "transport_tos": 0, 00:16:59.711 "nvme_error_stat": false, 00:16:59.711 "rdma_srq_size": 0, 00:16:59.711 "io_path_stat": false, 00:16:59.711 "allow_accel_sequence": false, 00:16:59.711 "rdma_max_cq_size": 0, 00:16:59.711 "rdma_cm_event_timeout_ms": 0, 00:16:59.711 
"dhchap_digests": [ 00:16:59.711 "sha256", 00:16:59.711 "sha384", 00:16:59.711 "sha512" 00:16:59.711 ], 00:16:59.711 "dhchap_dhgroups": [ 00:16:59.711 "null", 00:16:59.711 "ffdhe2048", 00:16:59.711 "ffdhe3072", 00:16:59.711 "ffdhe4096", 00:16:59.711 "ffdhe6144", 00:16:59.711 "ffdhe8192" 00:16:59.711 ] 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_nvme_attach_controller", 00:16:59.711 "params": { 00:16:59.711 "name": "nvme0", 00:16:59.711 "trtype": "TCP", 00:16:59.711 "adrfam": "IPv4", 00:16:59.711 "traddr": "10.0.0.2", 00:16:59.711 "trsvcid": "4420", 00:16:59.711 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.711 "prchk_reftag": false, 00:16:59.711 "prchk_guard": false, 00:16:59.711 "ctrlr_loss_timeout_sec": 0, 00:16:59.711 "reconnect_delay_sec": 0, 00:16:59.711 "fast_io_fail_timeout_sec": 0, 00:16:59.711 "psk": "key0", 00:16:59.711 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.711 "hdgst": false, 00:16:59.711 "ddgst": false, 00:16:59.711 "multipath": "multipath" 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_nvme_set_hotplug", 00:16:59.711 "params": { 00:16:59.711 "period_us": 100000, 00:16:59.711 "enable": false 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_enable_histogram", 00:16:59.711 "params": { 00:16:59.711 "name": "nvme0n1", 00:16:59.711 "enable": true 00:16:59.711 } 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "method": "bdev_wait_for_examine" 00:16:59.711 } 00:16:59.711 ] 00:16:59.711 }, 00:16:59.711 { 00:16:59.711 "subsystem": "nbd", 00:16:59.711 "config": [] 00:16:59.711 } 00:16:59.711 ] 00:16:59.711 }' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3731934 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3731934 ']' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3731934 00:16:59.711 19:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731934 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731934' 00:16:59.711 killing process with pid 3731934 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3731934 00:16:59.711 Received shutdown signal, test time was about 1.000000 seconds 00:16:59.711 00:16:59.711 Latency(us) 00:16:59.711 [2024-11-26T18:23:33.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.711 [2024-11-26T18:23:33.576Z] =================================================================================================================== 00:16:59.711 [2024-11-26T18:23:33.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3731934 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3731731 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3731731 ']' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3731731 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.711 
19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3731731 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3731731' 00:16:59.711 killing process with pid 3731731 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3731731 00:16:59.711 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3731731 00:16:59.972 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:16:59.972 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:59.972 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.972 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.972 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:16:59.972 "subsystems": [ 00:16:59.972 { 00:16:59.972 "subsystem": "keyring", 00:16:59.972 "config": [ 00:16:59.972 { 00:16:59.972 "method": "keyring_file_add_key", 00:16:59.972 "params": { 00:16:59.972 "name": "key0", 00:16:59.972 "path": "/tmp/tmp.rKxa3CyWst" 00:16:59.972 } 00:16:59.972 } 00:16:59.972 ] 00:16:59.972 }, 00:16:59.972 { 00:16:59.972 "subsystem": "iobuf", 00:16:59.972 "config": [ 00:16:59.972 { 00:16:59.972 "method": "iobuf_set_options", 00:16:59.972 "params": { 00:16:59.972 "small_pool_count": 8192, 00:16:59.972 "large_pool_count": 1024, 00:16:59.972 "small_bufsize": 8192, 00:16:59.972 "large_bufsize": 135168, 00:16:59.972 "enable_numa": false 00:16:59.972 } 00:16:59.972 } 00:16:59.972 
] 00:16:59.972 }, 00:16:59.972 { 00:16:59.972 "subsystem": "sock", 00:16:59.972 "config": [ 00:16:59.972 { 00:16:59.972 "method": "sock_set_default_impl", 00:16:59.972 "params": { 00:16:59.973 "impl_name": "posix" 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "sock_impl_set_options", 00:16:59.973 "params": { 00:16:59.973 "impl_name": "ssl", 00:16:59.973 "recv_buf_size": 4096, 00:16:59.973 "send_buf_size": 4096, 00:16:59.973 "enable_recv_pipe": true, 00:16:59.973 "enable_quickack": false, 00:16:59.973 "enable_placement_id": 0, 00:16:59.973 "enable_zerocopy_send_server": true, 00:16:59.973 "enable_zerocopy_send_client": false, 00:16:59.973 "zerocopy_threshold": 0, 00:16:59.973 "tls_version": 0, 00:16:59.973 "enable_ktls": false 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "sock_impl_set_options", 00:16:59.973 "params": { 00:16:59.973 "impl_name": "posix", 00:16:59.973 "recv_buf_size": 2097152, 00:16:59.973 "send_buf_size": 2097152, 00:16:59.973 "enable_recv_pipe": true, 00:16:59.973 "enable_quickack": false, 00:16:59.973 "enable_placement_id": 0, 00:16:59.973 "enable_zerocopy_send_server": true, 00:16:59.973 "enable_zerocopy_send_client": false, 00:16:59.973 "zerocopy_threshold": 0, 00:16:59.973 "tls_version": 0, 00:16:59.973 "enable_ktls": false 00:16:59.973 } 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "vmd", 00:16:59.973 "config": [] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "accel", 00:16:59.973 "config": [ 00:16:59.973 { 00:16:59.973 "method": "accel_set_options", 00:16:59.973 "params": { 00:16:59.973 "small_cache_size": 128, 00:16:59.973 "large_cache_size": 16, 00:16:59.973 "task_count": 2048, 00:16:59.973 "sequence_count": 2048, 00:16:59.973 "buf_count": 2048 00:16:59.973 } 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "bdev", 00:16:59.973 "config": [ 00:16:59.973 { 00:16:59.973 "method": "bdev_set_options", 
00:16:59.973 "params": { 00:16:59.973 "bdev_io_pool_size": 65535, 00:16:59.973 "bdev_io_cache_size": 256, 00:16:59.973 "bdev_auto_examine": true, 00:16:59.973 "iobuf_small_cache_size": 128, 00:16:59.973 "iobuf_large_cache_size": 16 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_raid_set_options", 00:16:59.973 "params": { 00:16:59.973 "process_window_size_kb": 1024, 00:16:59.973 "process_max_bandwidth_mb_sec": 0 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_iscsi_set_options", 00:16:59.973 "params": { 00:16:59.973 "timeout_sec": 30 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_nvme_set_options", 00:16:59.973 "params": { 00:16:59.973 "action_on_timeout": "none", 00:16:59.973 "timeout_us": 0, 00:16:59.973 "timeout_admin_us": 0, 00:16:59.973 "keep_alive_timeout_ms": 10000, 00:16:59.973 "arbitration_burst": 0, 00:16:59.973 "low_priority_weight": 0, 00:16:59.973 "medium_priority_weight": 0, 00:16:59.973 "high_priority_weight": 0, 00:16:59.973 "nvme_adminq_poll_period_us": 10000, 00:16:59.973 "nvme_ioq_poll_period_us": 0, 00:16:59.973 "io_queue_requests": 0, 00:16:59.973 "delay_cmd_submit": true, 00:16:59.973 "transport_retry_count": 4, 00:16:59.973 "bdev_retry_count": 3, 00:16:59.973 "transport_ack_timeout": 0, 00:16:59.973 "ctrlr_loss_timeout_sec": 0, 00:16:59.973 "reconnect_delay_sec": 0, 00:16:59.973 "fast_io_fail_timeout_sec": 0, 00:16:59.973 "disable_auto_failback": false, 00:16:59.973 "generate_uuids": false, 00:16:59.973 "transport_tos": 0, 00:16:59.973 "nvme_error_stat": false, 00:16:59.973 "rdma_srq_size": 0, 00:16:59.973 "io_path_stat": false, 00:16:59.973 "allow_accel_sequence": false, 00:16:59.973 "rdma_max_cq_size": 0, 00:16:59.973 "rdma_cm_event_timeout_ms": 0, 00:16:59.973 "dhchap_digests": [ 00:16:59.973 "sha256", 00:16:59.973 "sha384", 00:16:59.973 "sha512" 00:16:59.973 ], 00:16:59.973 "dhchap_dhgroups": [ 00:16:59.973 "null", 00:16:59.973 "ffdhe2048", 00:16:59.973 
"ffdhe3072", 00:16:59.973 "ffdhe4096", 00:16:59.973 "ffdhe6144", 00:16:59.973 "ffdhe8192" 00:16:59.973 ] 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_nvme_set_hotplug", 00:16:59.973 "params": { 00:16:59.973 "period_us": 100000, 00:16:59.973 "enable": false 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_malloc_create", 00:16:59.973 "params": { 00:16:59.973 "name": "malloc0", 00:16:59.973 "num_blocks": 8192, 00:16:59.973 "block_size": 4096, 00:16:59.973 "physical_block_size": 4096, 00:16:59.973 "uuid": "036ce140-505c-44d8-add1-b6bce0605467", 00:16:59.973 "optimal_io_boundary": 0, 00:16:59.973 "md_size": 0, 00:16:59.973 "dif_type": 0, 00:16:59.973 "dif_is_head_of_md": false, 00:16:59.973 "dif_pi_format": 0 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "bdev_wait_for_examine" 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "nbd", 00:16:59.973 "config": [] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "scheduler", 00:16:59.973 "config": [ 00:16:59.973 { 00:16:59.973 "method": "framework_set_scheduler", 00:16:59.973 "params": { 00:16:59.973 "name": "static" 00:16:59.973 } 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "subsystem": "nvmf", 00:16:59.973 "config": [ 00:16:59.973 { 00:16:59.973 "method": "nvmf_set_config", 00:16:59.973 "params": { 00:16:59.973 "discovery_filter": "match_any", 00:16:59.973 "admin_cmd_passthru": { 00:16:59.973 "identify_ctrlr": false 00:16:59.973 }, 00:16:59.973 "dhchap_digests": [ 00:16:59.973 "sha256", 00:16:59.973 "sha384", 00:16:59.973 "sha512" 00:16:59.973 ], 00:16:59.973 "dhchap_dhgroups": [ 00:16:59.973 "null", 00:16:59.973 "ffdhe2048", 00:16:59.973 "ffdhe3072", 00:16:59.973 "ffdhe4096", 00:16:59.973 "ffdhe6144", 00:16:59.973 "ffdhe8192" 00:16:59.973 ] 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_set_max_subsystems", 00:16:59.973 "params": { 
00:16:59.973 "max_subsystems": 1024 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_set_crdt", 00:16:59.973 "params": { 00:16:59.973 "crdt1": 0, 00:16:59.973 "crdt2": 0, 00:16:59.973 "crdt3": 0 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_create_transport", 00:16:59.973 "params": { 00:16:59.973 "trtype": "TCP", 00:16:59.973 "max_queue_depth": 128, 00:16:59.973 "max_io_qpairs_per_ctrlr": 127, 00:16:59.973 "in_capsule_data_size": 4096, 00:16:59.973 "max_io_size": 131072, 00:16:59.973 "io_unit_size": 131072, 00:16:59.973 "max_aq_depth": 128, 00:16:59.973 "num_shared_buffers": 511, 00:16:59.973 "buf_cache_size": 4294967295, 00:16:59.973 "dif_insert_or_strip": false, 00:16:59.973 "zcopy": false, 00:16:59.973 "c2h_success": false, 00:16:59.973 "sock_priority": 0, 00:16:59.973 "abort_timeout_sec": 1, 00:16:59.973 "ack_timeout": 0, 00:16:59.973 "data_wr_pool_size": 0 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_create_subsystem", 00:16:59.973 "params": { 00:16:59.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.973 "allow_any_host": false, 00:16:59.973 "serial_number": "00000000000000000000", 00:16:59.973 "model_number": "SPDK bdev Controller", 00:16:59.973 "max_namespaces": 32, 00:16:59.973 "min_cntlid": 1, 00:16:59.973 "max_cntlid": 65519, 00:16:59.973 "ana_reporting": false 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_subsystem_add_host", 00:16:59.973 "params": { 00:16:59.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.973 "host": "nqn.2016-06.io.spdk:host1", 00:16:59.973 "psk": "key0" 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_subsystem_add_ns", 00:16:59.973 "params": { 00:16:59.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.973 "namespace": { 00:16:59.973 "nsid": 1, 00:16:59.973 "bdev_name": "malloc0", 00:16:59.973 "nguid": "036CE140505C44D8ADD1B6BCE0605467", 00:16:59.973 "uuid": 
"036ce140-505c-44d8-add1-b6bce0605467", 00:16:59.973 "no_auto_visible": false 00:16:59.973 } 00:16:59.973 } 00:16:59.973 }, 00:16:59.973 { 00:16:59.973 "method": "nvmf_subsystem_add_listener", 00:16:59.973 "params": { 00:16:59.973 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.973 "listen_address": { 00:16:59.973 "trtype": "TCP", 00:16:59.973 "adrfam": "IPv4", 00:16:59.973 "traddr": "10.0.0.2", 00:16:59.973 "trsvcid": "4420" 00:16:59.973 }, 00:16:59.973 "secure_channel": false, 00:16:59.973 "sock_impl": "ssl" 00:16:59.973 } 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 } 00:16:59.973 ] 00:16:59.973 }' 00:16:59.973 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3732402 00:16:59.973 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3732402 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3732402 ']' 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:59.974 19:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:16:59.974 [2024-11-26 19:23:33.714638] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:16:59.974 [2024-11-26 19:23:33.714690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.974 [2024-11-26 19:23:33.784971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.974 [2024-11-26 19:23:33.813539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.974 [2024-11-26 19:23:33.813567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.974 [2024-11-26 19:23:33.813573] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.974 [2024-11-26 19:23:33.813577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.974 [2024-11-26 19:23:33.813581] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.974 [2024-11-26 19:23:33.814058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.233 [2024-11-26 19:23:34.008423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.233 [2024-11-26 19:23:34.040469] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:00.233 [2024-11-26 19:23:34.040686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3732587 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3732587 /var/tmp/bdevperf.sock 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3732587 ']' 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:00.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:00.803 19:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:17:00.803 "subsystems": [ 00:17:00.803 { 00:17:00.803 "subsystem": "keyring", 00:17:00.803 "config": [ 00:17:00.803 { 00:17:00.803 "method": "keyring_file_add_key", 00:17:00.803 "params": { 00:17:00.803 "name": "key0", 00:17:00.803 "path": "/tmp/tmp.rKxa3CyWst" 00:17:00.803 } 00:17:00.803 } 00:17:00.803 ] 00:17:00.803 }, 00:17:00.803 { 00:17:00.803 "subsystem": "iobuf", 00:17:00.803 "config": [ 00:17:00.803 { 00:17:00.803 "method": "iobuf_set_options", 00:17:00.803 "params": { 00:17:00.803 "small_pool_count": 8192, 00:17:00.803 "large_pool_count": 1024, 00:17:00.803 "small_bufsize": 8192, 00:17:00.803 "large_bufsize": 135168, 00:17:00.803 "enable_numa": false 00:17:00.803 } 00:17:00.803 } 00:17:00.803 ] 00:17:00.803 }, 00:17:00.803 { 00:17:00.803 "subsystem": "sock", 00:17:00.803 "config": [ 00:17:00.803 { 00:17:00.803 "method": "sock_set_default_impl", 00:17:00.803 "params": { 00:17:00.803 "impl_name": "posix" 00:17:00.803 } 00:17:00.803 }, 00:17:00.803 { 00:17:00.803 "method": "sock_impl_set_options", 00:17:00.803 "params": { 00:17:00.803 "impl_name": "ssl", 00:17:00.803 "recv_buf_size": 4096, 00:17:00.803 "send_buf_size": 4096, 00:17:00.803 "enable_recv_pipe": true, 00:17:00.803 "enable_quickack": false, 00:17:00.803 "enable_placement_id": 0, 00:17:00.803 "enable_zerocopy_send_server": true, 00:17:00.803 
"enable_zerocopy_send_client": false, 00:17:00.803 "zerocopy_threshold": 0, 00:17:00.803 "tls_version": 0, 00:17:00.803 "enable_ktls": false 00:17:00.803 } 00:17:00.803 }, 00:17:00.803 { 00:17:00.803 "method": "sock_impl_set_options", 00:17:00.803 "params": { 00:17:00.803 "impl_name": "posix", 00:17:00.803 "recv_buf_size": 2097152, 00:17:00.804 "send_buf_size": 2097152, 00:17:00.804 "enable_recv_pipe": true, 00:17:00.804 "enable_quickack": false, 00:17:00.804 "enable_placement_id": 0, 00:17:00.804 "enable_zerocopy_send_server": true, 00:17:00.804 "enable_zerocopy_send_client": false, 00:17:00.804 "zerocopy_threshold": 0, 00:17:00.804 "tls_version": 0, 00:17:00.804 "enable_ktls": false 00:17:00.804 } 00:17:00.804 } 00:17:00.804 ] 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "subsystem": "vmd", 00:17:00.804 "config": [] 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "subsystem": "accel", 00:17:00.804 "config": [ 00:17:00.804 { 00:17:00.804 "method": "accel_set_options", 00:17:00.804 "params": { 00:17:00.804 "small_cache_size": 128, 00:17:00.804 "large_cache_size": 16, 00:17:00.804 "task_count": 2048, 00:17:00.804 "sequence_count": 2048, 00:17:00.804 "buf_count": 2048 00:17:00.804 } 00:17:00.804 } 00:17:00.804 ] 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "subsystem": "bdev", 00:17:00.804 "config": [ 00:17:00.804 { 00:17:00.804 "method": "bdev_set_options", 00:17:00.804 "params": { 00:17:00.804 "bdev_io_pool_size": 65535, 00:17:00.804 "bdev_io_cache_size": 256, 00:17:00.804 "bdev_auto_examine": true, 00:17:00.804 "iobuf_small_cache_size": 128, 00:17:00.804 "iobuf_large_cache_size": 16 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_raid_set_options", 00:17:00.804 "params": { 00:17:00.804 "process_window_size_kb": 1024, 00:17:00.804 "process_max_bandwidth_mb_sec": 0 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_iscsi_set_options", 00:17:00.804 "params": { 00:17:00.804 "timeout_sec": 30 00:17:00.804 } 00:17:00.804 }, 
00:17:00.804 { 00:17:00.804 "method": "bdev_nvme_set_options", 00:17:00.804 "params": { 00:17:00.804 "action_on_timeout": "none", 00:17:00.804 "timeout_us": 0, 00:17:00.804 "timeout_admin_us": 0, 00:17:00.804 "keep_alive_timeout_ms": 10000, 00:17:00.804 "arbitration_burst": 0, 00:17:00.804 "low_priority_weight": 0, 00:17:00.804 "medium_priority_weight": 0, 00:17:00.804 "high_priority_weight": 0, 00:17:00.804 "nvme_adminq_poll_period_us": 10000, 00:17:00.804 "nvme_ioq_poll_period_us": 0, 00:17:00.804 "io_queue_requests": 512, 00:17:00.804 "delay_cmd_submit": true, 00:17:00.804 "transport_retry_count": 4, 00:17:00.804 "bdev_retry_count": 3, 00:17:00.804 "transport_ack_timeout": 0, 00:17:00.804 "ctrlr_loss_timeout_sec": 0, 00:17:00.804 "reconnect_delay_sec": 0, 00:17:00.804 "fast_io_fail_timeout_sec": 0, 00:17:00.804 "disable_auto_failback": false, 00:17:00.804 "generate_uuids": false, 00:17:00.804 "transport_tos": 0, 00:17:00.804 "nvme_error_stat": false, 00:17:00.804 "rdma_srq_size": 0, 00:17:00.804 "io_path_stat": false, 00:17:00.804 "allow_accel_sequence": false, 00:17:00.804 "rdma_max_cq_size": 0, 00:17:00.804 "rdma_cm_event_timeout_ms": 0, 00:17:00.804 "dhchap_digests": [ 00:17:00.804 "sha256", 00:17:00.804 "sha384", 00:17:00.804 "sha512" 00:17:00.804 ], 00:17:00.804 "dhchap_dhgroups": [ 00:17:00.804 "null", 00:17:00.804 "ffdhe2048", 00:17:00.804 "ffdhe3072", 00:17:00.804 "ffdhe4096", 00:17:00.804 "ffdhe6144", 00:17:00.804 "ffdhe8192" 00:17:00.804 ] 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_nvme_attach_controller", 00:17:00.804 "params": { 00:17:00.804 "name": "nvme0", 00:17:00.804 "trtype": "TCP", 00:17:00.804 "adrfam": "IPv4", 00:17:00.804 "traddr": "10.0.0.2", 00:17:00.804 "trsvcid": "4420", 00:17:00.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.804 "prchk_reftag": false, 00:17:00.804 "prchk_guard": false, 00:17:00.804 "ctrlr_loss_timeout_sec": 0, 00:17:00.804 "reconnect_delay_sec": 0, 00:17:00.804 
"fast_io_fail_timeout_sec": 0, 00:17:00.804 "psk": "key0", 00:17:00.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.804 "hdgst": false, 00:17:00.804 "ddgst": false, 00:17:00.804 "multipath": "multipath" 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_nvme_set_hotplug", 00:17:00.804 "params": { 00:17:00.804 "period_us": 100000, 00:17:00.804 "enable": false 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_enable_histogram", 00:17:00.804 "params": { 00:17:00.804 "name": "nvme0n1", 00:17:00.804 "enable": true 00:17:00.804 } 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "method": "bdev_wait_for_examine" 00:17:00.804 } 00:17:00.804 ] 00:17:00.804 }, 00:17:00.804 { 00:17:00.804 "subsystem": "nbd", 00:17:00.804 "config": [] 00:17:00.804 } 00:17:00.804 ] 00:17:00.804 }' 00:17:00.804 [2024-11-26 19:23:34.546887] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:17:00.804 [2024-11-26 19:23:34.546939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3732587 ] 00:17:00.804 [2024-11-26 19:23:34.611093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.804 [2024-11-26 19:23:34.640973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.066 [2024-11-26 19:23:34.777103] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.634 19:23:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.895 Running I/O for 1 seconds... 00:17:02.836 3976.00 IOPS, 15.53 MiB/s 00:17:02.836 Latency(us) 00:17:02.836 [2024-11-26T18:23:36.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.836 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:02.836 Verification LBA range: start 0x0 length 0x2000 00:17:02.836 nvme0n1 : 1.03 3965.80 15.49 0.00 0.00 31789.87 4669.44 55268.69 00:17:02.836 [2024-11-26T18:23:36.701Z] =================================================================================================================== 00:17:02.836 [2024-11-26T18:23:36.701Z] Total : 3965.80 15.49 0.00 0.00 31789.87 4669.44 55268.69 00:17:02.836 { 00:17:02.836 "results": [ 00:17:02.836 { 00:17:02.836 "job": "nvme0n1", 00:17:02.836 "core_mask": "0x2", 00:17:02.836 "workload": "verify", 00:17:02.836 "status": "finished", 00:17:02.836 "verify_range": { 00:17:02.836 "start": 0, 00:17:02.836 "length": 8192 00:17:02.836 }, 00:17:02.836 "queue_depth": 128, 00:17:02.836 "io_size": 4096, 00:17:02.836 "runtime": 1.034848, 00:17:02.836 "iops": 3965.7998082810227, 00:17:02.836 "mibps": 15.491405501097745, 00:17:02.836 "io_failed": 0, 00:17:02.836 "io_timeout": 0, 00:17:02.836 "avg_latency_us": 31789.873736192338, 00:17:02.836 "min_latency_us": 4669.44, 00:17:02.836 "max_latency_us": 55268.693333333336 00:17:02.836 } 00:17:02.836 ], 00:17:02.836 "core_count": 1 00:17:02.836 } 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 
00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:02.836 nvmf_trace.0 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3732587 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3732587 ']' 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3732587 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.836 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# ps --no-headers -o comm= 3732587 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732587' 00:17:03.095 killing process with pid 3732587 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3732587 00:17:03.095 Received shutdown signal, test time was about 1.000000 seconds 00:17:03.095 00:17:03.095 Latency(us) 00:17:03.095 [2024-11-26T18:23:36.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.095 [2024-11-26T18:23:36.960Z] =================================================================================================================== 00:17:03.095 [2024-11-26T18:23:36.960Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3732587 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:03.095 rmmod nvme_tcp 00:17:03.095 rmmod nvme_fabrics 00:17:03.095 rmmod nvme_keyring 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3732402 ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3732402 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3732402 ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3732402 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3732402 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3732402' 00:17:03.095 killing process with pid 3732402 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3732402 00:17:03.095 19:23:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3732402 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:03.355 19:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.355 19:23:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.262 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:05.262 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xit6rdGDM2 /tmp/tmp.jFBz4v5hhh /tmp/tmp.rKxa3CyWst 00:17:05.262 00:17:05.262 real 1m14.957s 00:17:05.262 user 2m0.187s 00:17:05.262 sys 0m21.823s 00:17:05.262 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:05.262 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.262 ************************************ 00:17:05.262 END TEST nvmf_tls 00:17:05.262 ************************************ 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:05.522 
19:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:05.522 ************************************ 00:17:05.522 START TEST nvmf_fips 00:17:05.522 ************************************ 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:05.522 * Looking for test storage... 00:17:05.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.522 --rc genhtml_branch_coverage=1 00:17:05.522 --rc genhtml_function_coverage=1 00:17:05.522 --rc genhtml_legend=1 00:17:05.522 --rc geninfo_all_blocks=1 00:17:05.522 --rc geninfo_unexecuted_blocks=1 00:17:05.522 00:17:05.522 ' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.522 --rc genhtml_branch_coverage=1 00:17:05.522 --rc genhtml_function_coverage=1 00:17:05.522 --rc genhtml_legend=1 00:17:05.522 --rc geninfo_all_blocks=1 00:17:05.522 --rc geninfo_unexecuted_blocks=1 00:17:05.522 00:17:05.522 ' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.522 --rc genhtml_branch_coverage=1 00:17:05.522 --rc genhtml_function_coverage=1 00:17:05.522 --rc genhtml_legend=1 00:17:05.522 --rc geninfo_all_blocks=1 00:17:05.522 --rc geninfo_unexecuted_blocks=1 00:17:05.522 00:17:05.522 ' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:05.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.522 --rc genhtml_branch_coverage=1 00:17:05.522 --rc genhtml_function_coverage=1 00:17:05.522 --rc genhtml_legend=1 00:17:05.522 --rc geninfo_all_blocks=1 00:17:05.522 --rc geninfo_unexecuted_blocks=1 00:17:05.522 00:17:05.522 ' 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:05.522 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:05.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:17:05.523 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:17:05.524 Error setting digest 00:17:05.524 40227392047F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:17:05.524 40227392047F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:05.524 19:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:05.524 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:05.783 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:05.783 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:17:05.783 19:23:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:11.058 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:11.058 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:11.058 Found net devices under 0000:31:00.0: cvl_0_0 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:11.058 Found net devices under 0000:31:00.1: cvl_0_1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:11.058 19:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:11.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:11.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:17:11.058 00:17:11.058 --- 10.0.0.2 ping statistics --- 00:17:11.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.058 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:11.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:11.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:17:11.058 00:17:11.058 --- 10.0.0.1 ping statistics --- 00:17:11.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:11.058 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:11.058 19:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3737457 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3737457 00:17:11.058 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3737457 ']' 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:11.059 19:23:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:11.059 [2024-11-26 19:23:44.745015] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:17:11.059 [2024-11-26 19:23:44.745068] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.059 [2024-11-26 19:23:44.816455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.059 [2024-11-26 19:23:44.845109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.059 [2024-11-26 19:23:44.845134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:11.059 [2024-11-26 19:23:44.845140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.059 [2024-11-26 19:23:44.845145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.059 [2024-11-26 19:23:44.845149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:11.059 [2024-11-26 19:23:44.845641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Lmq 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Lmq 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Lmq 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Lmq 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.996 [2024-11-26 19:23:45.674553] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:11.996 [2024-11-26 19:23:45.690555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:11.996 [2024-11-26 19:23:45.690731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:11.996 malloc0 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3737808 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3737808 /var/tmp/bdevperf.sock 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3737808 ']' 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:11.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:11.996 19:23:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:11.996 [2024-11-26 19:23:45.791411] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:17:11.996 [2024-11-26 19:23:45.791463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3737808 ] 00:17:12.255 [2024-11-26 19:23:45.869524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.255 [2024-11-26 19:23:45.904638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.823 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.823 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:17:12.823 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Lmq 00:17:13.082 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:13.082 [2024-11-26 19:23:46.837094] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:13.082 TLSTESTn1 00:17:13.082 19:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:13.344 Running I/O for 10 seconds... 
00:17:15.224 3275.00 IOPS, 12.79 MiB/s [2024-11-26T18:23:50.024Z] 3569.50 IOPS, 13.94 MiB/s [2024-11-26T18:23:51.404Z] 3889.00 IOPS, 15.19 MiB/s [2024-11-26T18:23:52.344Z] 4293.00 IOPS, 16.77 MiB/s [2024-11-26T18:23:53.286Z] 4410.20 IOPS, 17.23 MiB/s [2024-11-26T18:23:54.227Z] 4406.50 IOPS, 17.21 MiB/s [2024-11-26T18:23:55.166Z] 4382.29 IOPS, 17.12 MiB/s [2024-11-26T18:23:56.106Z] 4492.88 IOPS, 17.55 MiB/s [2024-11-26T18:23:57.047Z] 4484.33 IOPS, 17.52 MiB/s [2024-11-26T18:23:57.307Z] 4468.30 IOPS, 17.45 MiB/s 00:17:23.442 Latency(us) 00:17:23.442 [2024-11-26T18:23:57.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.442 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:23.442 Verification LBA range: start 0x0 length 0x2000 00:17:23.442 TLSTESTn1 : 10.05 4456.93 17.41 0.00 0.00 28630.90 6744.75 84322.99 00:17:23.442 [2024-11-26T18:23:57.307Z] =================================================================================================================== 00:17:23.442 [2024-11-26T18:23:57.307Z] Total : 4456.93 17.41 0.00 0.00 28630.90 6744.75 84322.99 00:17:23.442 { 00:17:23.442 "results": [ 00:17:23.442 { 00:17:23.442 "job": "TLSTESTn1", 00:17:23.442 "core_mask": "0x4", 00:17:23.442 "workload": "verify", 00:17:23.442 "status": "finished", 00:17:23.442 "verify_range": { 00:17:23.442 "start": 0, 00:17:23.442 "length": 8192 00:17:23.442 }, 00:17:23.442 "queue_depth": 128, 00:17:23.442 "io_size": 4096, 00:17:23.442 "runtime": 10.053557, 00:17:23.442 "iops": 4456.930019892462, 00:17:23.442 "mibps": 17.40988289020493, 00:17:23.442 "io_failed": 0, 00:17:23.442 "io_timeout": 0, 00:17:23.442 "avg_latency_us": 28630.89526394096, 00:17:23.442 "min_latency_us": 6744.746666666667, 00:17:23.442 "max_latency_us": 84322.98666666666 00:17:23.442 } 00:17:23.442 ], 00:17:23.442 "core_count": 1 00:17:23.442 } 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:17:23.442 
19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:23.442 nvmf_trace.0 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3737808 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3737808 ']' 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3737808 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737808 00:17:23.442 19:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3737808' 00:17:23.442 killing process with pid 3737808 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3737808 00:17:23.442 Received shutdown signal, test time was about 10.000000 seconds 00:17:23.442 00:17:23.442 Latency(us) 00:17:23.442 [2024-11-26T18:23:57.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.442 [2024-11-26T18:23:57.307Z] =================================================================================================================== 00:17:23.442 [2024-11-26T18:23:57.307Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3737808 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:23.442 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:23.701 rmmod nvme_tcp 00:17:23.701 rmmod nvme_fabrics 00:17:23.701 rmmod nvme_keyring 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3737457 ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3737457 ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3737457' 00:17:23.701 killing process with pid 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3737457 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:17:23.701 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:23.702 19:23:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Lmq 00:17:26.240 00:17:26.240 real 0m20.408s 00:17:26.240 user 0m23.914s 00:17:26.240 sys 0m7.216s 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:26.240 ************************************ 00:17:26.240 END TEST nvmf_fips 00:17:26.240 ************************************ 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.240 ************************************ 00:17:26.240 START TEST nvmf_control_msg_list 00:17:26.240 ************************************ 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:17:26.240 * Looking for test storage... 00:17:26.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.240 19:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.240 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:26.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.240 --rc genhtml_branch_coverage=1 00:17:26.240 --rc genhtml_function_coverage=1 00:17:26.240 --rc genhtml_legend=1 00:17:26.241 --rc geninfo_all_blocks=1 00:17:26.241 --rc geninfo_unexecuted_blocks=1 00:17:26.241 00:17:26.241 ' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.241 --rc genhtml_branch_coverage=1 00:17:26.241 --rc genhtml_function_coverage=1 00:17:26.241 --rc genhtml_legend=1 00:17:26.241 --rc geninfo_all_blocks=1 00:17:26.241 --rc geninfo_unexecuted_blocks=1 00:17:26.241 00:17:26.241 ' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.241 --rc genhtml_branch_coverage=1 00:17:26.241 --rc genhtml_function_coverage=1 00:17:26.241 --rc genhtml_legend=1 00:17:26.241 --rc geninfo_all_blocks=1 00:17:26.241 --rc geninfo_unexecuted_blocks=1 00:17:26.241 00:17:26.241 ' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:17:26.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.241 --rc genhtml_branch_coverage=1 00:17:26.241 --rc genhtml_function_coverage=1 00:17:26.241 --rc genhtml_legend=1 00:17:26.241 --rc geninfo_all_blocks=1 00:17:26.241 --rc geninfo_unexecuted_blocks=1 00:17:26.241 00:17:26.241 ' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 
00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.241 19:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:26.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:26.241 19:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:17:26.241 19:23:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:17:31.516 19:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:31.516 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:31.516 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:31.516 19:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:31.516 Found net devices under 0000:31:00.0: cvl_0_0 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.516 19:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.516 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:31.517 Found net devices under 0000:31:00.1: cvl_0_1 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.517 19:24:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.517 19:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:31.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:17:31.517 00:17:31.517 --- 10.0.0.2 ping statistics --- 00:17:31.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.517 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:17:31.517 00:17:31.517 --- 10.0.0.1 ping statistics --- 00:17:31.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.517 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3744629 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3744629 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3744629 ']' 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:31.517 19:24:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:31.517 [2024-11-26 19:24:05.233596] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:17:31.517 [2024-11-26 19:24:05.233665] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.517 [2024-11-26 19:24:05.325367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.517 [2024-11-26 19:24:05.377034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.517 [2024-11-26 19:24:05.377084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.517 [2024-11-26 19:24:05.377093] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.517 [2024-11-26 19:24:05.377114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.517 [2024-11-26 19:24:05.377120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
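The nvmf_tcp_init steps traced above isolate one NIC port (cvl_0_0) in a network namespace so the SPDK target and the initiator can talk over real hardware on a single host. A minimal sketch of that setup, using the interface names, addresses, and firewall rule from this run (the DRY_RUN switch is an addition here so the function can be exercised without root):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init network setup seen in the log above.
# Run as root on a host whose two NIC ports are cabled back-to-back;
# with DRY_RUN set, every command is only printed, not executed.
setup_nvmf_tcp_net() {
    local target_if=$1 initiator_if=$2 ns=$3
    local run=${DRY_RUN:+echo}   # prefix commands with "echo" when DRY_RUN is set

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"
    $run ip netns add "$ns"                          # target side gets its own namespace
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"                   # initiator IP
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target IP
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port toward the initiator-facing interface
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
}

# Dry run with the names from this log; drop DRY_RUN=1 to apply for real.
DRY_RUN=1 setup_nvmf_tcp_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) then verify both directions of this link before the target application starts.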
00:17:31.517 [2024-11-26 19:24:05.377906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 [2024-11-26 19:24:06.061864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 Malloc0 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:32.458 [2024-11-26 19:24:06.096916] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3744955 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3744956 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3744957 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3744955 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:32.458 19:24:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:32.458 [2024-11-26 19:24:06.155455] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
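The rpc_cmd calls traced above configure the target for the control-message-list case: a deliberately tiny in-capsule data size (768 bytes) and a single control-message buffer (--control-msg-num 1) force requests onto the control message list this test exercises. A sketch of the same sequence issued through SPDK's RPC client (the scripts/rpc.py path is an assumption; the "echo" wrapper below only prints the calls instead of contacting a live target):

```shell
# Sketch of the rpc_cmd configuration sequence from the log. $1 is the
# RPC client invocation: pass "scripts/rpc.py" against a running nvmf_tgt,
# or "echo scripts/rpc.py" (as below) for a dry run.
configure_control_msg_target() {
    local rpc=$1 subnqn=nqn.2024-07.io.spdk:cnode0
    # Tiny in-capsule size plus one control-message buffer pushes admin
    # traffic through the control message list under test.
    $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    $rpc nvmf_create_subsystem "$subnqn" -a              # -a: allow any host
    $rpc bdev_malloc_create -b Malloc0 32 512            # 32 MiB bdev, 512 B blocks
    $rpc nvmf_subsystem_add_ns "$subnqn" Malloc0
    $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4420
}

configure_control_msg_target "echo scripts/rpc.py"   # dry run; prints the five calls
```

The three spdk_nvme_perf instances launched above (lcores 1-3, queue depth 1, 4 KiB random reads) then connect to this subsystem concurrently so they contend for the single control-message buffer.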
00:17:32.458 [2024-11-26 19:24:06.155705] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:32.458 [2024-11-26 19:24:06.165412] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:33.399 Initializing NVMe Controllers 00:17:33.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:17:33.399 Initialization complete. Launching workers. 00:17:33.399 ======================================================== 00:17:33.399 Latency(us) 00:17:33.399 Device Information : IOPS MiB/s Average min max 00:17:33.399 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1646.00 6.43 607.74 115.65 839.89 00:17:33.399 ======================================================== 00:17:33.399 Total : 1646.00 6.43 607.74 115.65 839.89 00:17:33.399 00:17:33.399 Initializing NVMe Controllers 00:17:33.399 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:17:33.399 Initialization complete. Launching workers. 
00:17:33.399 ======================================================== 00:17:33.399 Latency(us) 00:17:33.399 Device Information : IOPS MiB/s Average min max 00:17:33.400 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40897.43 40764.84 40956.58 00:17:33.400 ======================================================== 00:17:33.400 Total : 25.00 0.10 40897.43 40764.84 40956.58 00:17:33.400 00:17:33.400 Initializing NVMe Controllers 00:17:33.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:33.400 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:17:33.400 Initialization complete. Launching workers. 00:17:33.400 ======================================================== 00:17:33.400 Latency(us) 00:17:33.400 Device Information : IOPS MiB/s Average min max 00:17:33.400 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1618.00 6.32 618.09 120.00 891.86 00:17:33.400 ======================================================== 00:17:33.400 Total : 1618.00 6.32 618.09 120.00 891.86 00:17:33.400 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3744956 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3744957 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:33.659 19:24:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:33.659 rmmod nvme_tcp 00:17:33.659 rmmod nvme_fabrics 00:17:33.659 rmmod nvme_keyring 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3744629 ']' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3744629 ']' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3744629' 00:17:33.659 killing process with pid 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3744629 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.659 19:24:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:36.196 00:17:36.196 real 0m9.925s 00:17:36.196 user 0m6.701s 
00:17:36.196 sys 0m4.863s 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 ************************************ 00:17:36.196 END TEST nvmf_control_msg_list 00:17:36.196 ************************************ 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:36.196 ************************************ 00:17:36.196 START TEST nvmf_wait_for_buf 00:17:36.196 ************************************ 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:17:36.196 * Looking for test storage... 
00:17:36.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:17:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.196 --rc genhtml_branch_coverage=1 00:17:36.196 --rc genhtml_function_coverage=1 00:17:36.196 --rc genhtml_legend=1 00:17:36.196 --rc geninfo_all_blocks=1 00:17:36.196 --rc geninfo_unexecuted_blocks=1 00:17:36.196 00:17:36.196 ' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.196 --rc genhtml_branch_coverage=1 00:17:36.196 --rc genhtml_function_coverage=1 00:17:36.196 --rc genhtml_legend=1 00:17:36.196 --rc geninfo_all_blocks=1 00:17:36.196 --rc geninfo_unexecuted_blocks=1 00:17:36.196 00:17:36.196 ' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.196 --rc genhtml_branch_coverage=1 00:17:36.196 --rc genhtml_function_coverage=1 00:17:36.196 --rc genhtml_legend=1 00:17:36.196 --rc geninfo_all_blocks=1 00:17:36.196 --rc geninfo_unexecuted_blocks=1 00:17:36.196 00:17:36.196 ' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:36.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:36.196 --rc genhtml_branch_coverage=1 00:17:36.196 --rc genhtml_function_coverage=1 00:17:36.196 --rc genhtml_legend=1 00:17:36.196 --rc geninfo_all_blocks=1 00:17:36.196 --rc geninfo_unexecuted_blocks=1 00:17:36.196 00:17:36.196 ' 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:36.196 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:36.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:17:36.197 19:24:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:41.599 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:41.599 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:41.599 Found net devices under 0000:31:00.0: cvl_0_0 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:41.599 19:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.599 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:41.600 Found net devices under 0000:31:00.1: cvl_0_1 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:41.600 19:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.600 19:24:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.600 19:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:41.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:17:41.600 00:17:41.600 --- 10.0.0.2 ping statistics --- 00:17:41.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.600 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:17:41.600 00:17:41.600 --- 10.0.0.1 ping statistics --- 00:17:41.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.600 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3750079 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 3750079 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3750079 ']' 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:41.600 19:24:15 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:41.600 [2024-11-26 19:24:15.294446] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:17:41.600 [2024-11-26 19:24:15.294509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.600 [2024-11-26 19:24:15.386805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.600 [2024-11-26 19:24:15.436637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.600 [2024-11-26 19:24:15.436686] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:41.600 [2024-11-26 19:24:15.436695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.600 [2024-11-26 19:24:15.436702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.600 [2024-11-26 19:24:15.436708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.600 [2024-11-26 19:24:15.437301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 
19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 Malloc0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.540 [2024-11-26 19:24:16.231286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:42.540 [2024-11-26 19:24:16.255599] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:42.540 19:24:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:42.540 [2024-11-26 19:24:16.336932] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:44.447 Initializing NVMe Controllers 00:17:44.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:17:44.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:17:44.447 Initialization complete. Launching workers. 00:17:44.447 ======================================================== 00:17:44.447 Latency(us) 00:17:44.447 Device Information : IOPS MiB/s Average min max 00:17:44.447 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 24.96 3.12 169189.59 47863.91 199538.40 00:17:44.447 ======================================================== 00:17:44.447 Total : 24.96 3.12 169189.59 47863.91 199538.40 00:17:44.447 00:17:44.447 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:17:44.447 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.447 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:44.447 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:17:44.447 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.448 19:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.448 rmmod nvme_tcp 00:17:44.448 rmmod nvme_fabrics 00:17:44.448 rmmod nvme_keyring 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3750079 ']' 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3750079 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3750079 ']' 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3750079 
00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3750079 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3750079' 00:17:44.448 killing process with pid 3750079 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3750079 00:17:44.448 19:24:17 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3750079 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:44.448 19:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.448 19:24:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:46.357 00:17:46.357 real 0m10.565s 00:17:46.357 user 0m4.375s 00:17:46.357 sys 0m4.590s 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:17:46.357 ************************************ 00:17:46.357 END TEST nvmf_wait_for_buf 00:17:46.357 ************************************ 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:17:46.357 19:24:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.635 
19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:51.635 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.635 19:24:25 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:51.635 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:51.635 Found net devices under 0000:31:00.0: cvl_0_0 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:51.635 Found net devices under 0000:31:00.1: cvl_0_1 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.635 ************************************ 00:17:51.635 START TEST nvmf_perf_adq 00:17:51.635 ************************************ 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:17:51.635 * Looking for test storage... 00:17:51.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:17:51.635 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.896 --rc genhtml_branch_coverage=1 00:17:51.896 --rc genhtml_function_coverage=1 00:17:51.896 --rc genhtml_legend=1 00:17:51.896 --rc geninfo_all_blocks=1 00:17:51.896 --rc geninfo_unexecuted_blocks=1 00:17:51.896 00:17:51.896 ' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.896 --rc genhtml_branch_coverage=1 00:17:51.896 --rc genhtml_function_coverage=1 00:17:51.896 --rc genhtml_legend=1 00:17:51.896 --rc geninfo_all_blocks=1 00:17:51.896 --rc geninfo_unexecuted_blocks=1 00:17:51.896 00:17:51.896 ' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.896 --rc genhtml_branch_coverage=1 00:17:51.896 --rc genhtml_function_coverage=1 00:17:51.896 --rc genhtml_legend=1 00:17:51.896 --rc geninfo_all_blocks=1 00:17:51.896 --rc geninfo_unexecuted_blocks=1 00:17:51.896 00:17:51.896 ' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:51.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.896 --rc genhtml_branch_coverage=1 00:17:51.896 --rc genhtml_function_coverage=1 00:17:51.896 --rc genhtml_legend=1 00:17:51.896 --rc geninfo_all_blocks=1 00:17:51.896 --rc geninfo_unexecuted_blocks=1 00:17:51.896 00:17:51.896 ' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.896 19:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.896 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:17:51.897 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:17:51.897 19:24:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:57.174 19:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:57.174 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:57.174 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:57.174 Found net devices under 0000:31:00.0: cvl_0_0 00:17:57.174 19:24:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:57.174 Found net devices under 0000:31:00.1: cvl_0_1 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:17:57.174 19:24:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:17:58.553 19:24:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:18:01.093 19:24:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:06.369 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:06.369 19:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:06.369 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:06.369 Found net devices under 0000:31:00.0: cvl_0_0 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.369 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:06.370 Found net devices under 0000:31:00.1: cvl_0_1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:06.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:06.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:18:06.370 00:18:06.370 --- 10.0.0.2 ping statistics --- 00:18:06.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.370 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:06.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:18:06.370 00:18:06.370 --- 10.0.0.1 ping statistics --- 00:18:06.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.370 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3760931 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3760931 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3760931 ']' 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.370 19:24:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:06.370 [2024-11-26 19:24:39.773833] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:18:06.370 [2024-11-26 19:24:39.773884] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:06.370 [2024-11-26 19:24:39.862906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.370 [2024-11-26 19:24:39.915459] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.370 [2024-11-26 19:24:39.915515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.370 [2024-11-26 19:24:39.915525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:06.370 [2024-11-26 19:24:39.915532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:06.370 [2024-11-26 19:24:39.915538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:06.370 [2024-11-26 19:24:39.917730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.370 [2024-11-26 19:24:39.917929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.370 [2024-11-26 19:24:39.918086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.370 [2024-11-26 19:24:39.918086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:06.939 19:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 [2024-11-26 19:24:40.685182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 Malloc1 00:18:06.939 19:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:06.939 [2024-11-26 19:24:40.739915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3761023 00:18:06.939 19:24:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:18:06.939 19:24:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:18:09.480 "tick_rate": 2400000000, 00:18:09.480 "poll_groups": [ 00:18:09.480 { 00:18:09.480 "name": "nvmf_tgt_poll_group_000", 00:18:09.480 "admin_qpairs": 1, 00:18:09.480 "io_qpairs": 1, 00:18:09.480 "current_admin_qpairs": 1, 00:18:09.480 "current_io_qpairs": 1, 00:18:09.480 "pending_bdev_io": 0, 00:18:09.480 "completed_nvme_io": 26409, 00:18:09.480 "transports": [ 00:18:09.480 { 00:18:09.480 "trtype": "TCP" 00:18:09.480 } 00:18:09.480 ] 00:18:09.480 }, 00:18:09.480 { 00:18:09.480 "name": "nvmf_tgt_poll_group_001", 00:18:09.480 "admin_qpairs": 0, 00:18:09.480 "io_qpairs": 1, 00:18:09.480 "current_admin_qpairs": 0, 00:18:09.480 "current_io_qpairs": 1, 00:18:09.480 "pending_bdev_io": 0, 00:18:09.480 "completed_nvme_io": 26071, 00:18:09.480 "transports": [ 00:18:09.480 { 00:18:09.480 "trtype": "TCP" 00:18:09.480 } 00:18:09.480 ] 00:18:09.480 }, 00:18:09.480 { 00:18:09.480 "name": "nvmf_tgt_poll_group_002", 00:18:09.480 "admin_qpairs": 0, 00:18:09.480 "io_qpairs": 1, 00:18:09.480 "current_admin_qpairs": 0, 00:18:09.480 "current_io_qpairs": 1, 00:18:09.480 "pending_bdev_io": 0, 00:18:09.480 "completed_nvme_io": 25279, 00:18:09.480 
"transports": [ 00:18:09.480 { 00:18:09.480 "trtype": "TCP" 00:18:09.480 } 00:18:09.480 ] 00:18:09.480 }, 00:18:09.480 { 00:18:09.480 "name": "nvmf_tgt_poll_group_003", 00:18:09.480 "admin_qpairs": 0, 00:18:09.480 "io_qpairs": 1, 00:18:09.480 "current_admin_qpairs": 0, 00:18:09.480 "current_io_qpairs": 1, 00:18:09.480 "pending_bdev_io": 0, 00:18:09.480 "completed_nvme_io": 26218, 00:18:09.480 "transports": [ 00:18:09.480 { 00:18:09.480 "trtype": "TCP" 00:18:09.480 } 00:18:09.480 ] 00:18:09.480 } 00:18:09.480 ] 00:18:09.480 }' 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:18:09.480 19:24:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3761023 00:18:17.617 Initializing NVMe Controllers 00:18:17.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:17.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:17.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:17.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:17.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:17.617 Initialization complete. Launching workers. 
00:18:17.617 ======================================================== 00:18:17.617 Latency(us) 00:18:17.617 Device Information : IOPS MiB/s Average min max 00:18:17.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14168.84 55.35 4516.38 1572.51 8113.85 00:18:17.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14273.54 55.76 4483.46 1260.13 7698.17 00:18:17.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13525.95 52.84 4731.63 1141.66 9210.24 00:18:17.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13840.94 54.07 4634.49 1170.68 44658.80 00:18:17.617 ======================================================== 00:18:17.617 Total : 55809.28 218.00 4589.42 1141.66 44658.80 00:18:17.617 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:17.617 rmmod nvme_tcp 00:18:17.617 rmmod nvme_fabrics 00:18:17.617 rmmod nvme_keyring 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:18:17.617 19:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3760931 ']' 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3760931 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3760931 ']' 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3760931 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.617 19:24:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3760931 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3760931' 00:18:17.617 killing process with pid 3760931 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3760931 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3760931 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:18:17.617 
19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:17.617 19:24:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:19.525 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:19.525 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:18:19.525 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:18:19.525 19:24:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:18:20.903 19:24:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:18:22.808 19:24:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.086 19:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:28.086 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:28.086 
Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.086 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:28.087 Found net devices under 0000:31:00.0: cvl_0_0 00:18:28.087 19:25:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:28.087 Found net devices under 0000:31:00.1: cvl_0_1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.438 ms 00:18:28.087 00:18:28.087 --- 10.0.0.2 ping statistics --- 00:18:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.087 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:18:28.087 00:18:28.087 --- 10.0.0.1 ping statistics --- 00:18:28.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.087 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:18:28.087 net.core.busy_poll = 1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:18:28.087 net.core.busy_read = 1 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:18:28.087 19:25:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3766147 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3766147 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3766147 ']' 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.348 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:28.348 [2024-11-26 19:25:02.092522] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:18:28.348 [2024-11-26 19:25:02.092571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.348 [2024-11-26 19:25:02.176693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.608 [2024-11-26 19:25:02.213755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.608 [2024-11-26 19:25:02.213786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.608 [2024-11-26 19:25:02.213794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.608 [2024-11-26 19:25:02.213802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:28.608 [2024-11-26 19:25:02.213807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.608 [2024-11-26 19:25:02.215379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.608 [2024-11-26 19:25:02.215531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.608 [2024-11-26 19:25:02.215678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.608 [2024-11-26 19:25:02.215679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 [2024-11-26 19:25:02.998564] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 Malloc1 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.176 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:29.437 [2024-11-26 19:25:03.048066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3766474 
00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:18:29.437 19:25:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:31.343 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:18:31.343 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.343 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:31.343 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.343 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:18:31.343 "tick_rate": 2400000000, 00:18:31.343 "poll_groups": [ 00:18:31.343 { 00:18:31.343 "name": "nvmf_tgt_poll_group_000", 00:18:31.343 "admin_qpairs": 1, 00:18:31.343 "io_qpairs": 1, 00:18:31.343 "current_admin_qpairs": 1, 00:18:31.343 "current_io_qpairs": 1, 00:18:31.343 "pending_bdev_io": 0, 00:18:31.343 "completed_nvme_io": 24644, 00:18:31.343 "transports": [ 00:18:31.343 { 00:18:31.343 "trtype": "TCP" 00:18:31.343 } 00:18:31.343 ] 00:18:31.343 }, 00:18:31.343 { 00:18:31.343 "name": "nvmf_tgt_poll_group_001", 00:18:31.343 "admin_qpairs": 0, 00:18:31.343 "io_qpairs": 3, 00:18:31.343 "current_admin_qpairs": 0, 00:18:31.343 "current_io_qpairs": 3, 00:18:31.343 "pending_bdev_io": 0, 00:18:31.343 "completed_nvme_io": 40749, 00:18:31.343 "transports": [ 00:18:31.343 { 00:18:31.343 "trtype": "TCP" 00:18:31.343 } 00:18:31.343 ] 00:18:31.343 }, 00:18:31.343 { 00:18:31.343 "name": "nvmf_tgt_poll_group_002", 00:18:31.343 "admin_qpairs": 0, 00:18:31.343 "io_qpairs": 0, 00:18:31.344 "current_admin_qpairs": 0, 
00:18:31.344 "current_io_qpairs": 0, 00:18:31.344 "pending_bdev_io": 0, 00:18:31.344 "completed_nvme_io": 0, 00:18:31.344 "transports": [ 00:18:31.344 { 00:18:31.344 "trtype": "TCP" 00:18:31.344 } 00:18:31.344 ] 00:18:31.344 }, 00:18:31.344 { 00:18:31.344 "name": "nvmf_tgt_poll_group_003", 00:18:31.344 "admin_qpairs": 0, 00:18:31.344 "io_qpairs": 0, 00:18:31.344 "current_admin_qpairs": 0, 00:18:31.344 "current_io_qpairs": 0, 00:18:31.344 "pending_bdev_io": 0, 00:18:31.344 "completed_nvme_io": 0, 00:18:31.344 "transports": [ 00:18:31.344 { 00:18:31.344 "trtype": "TCP" 00:18:31.344 } 00:18:31.344 ] 00:18:31.344 } 00:18:31.344 ] 00:18:31.344 }' 00:18:31.344 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:18:31.344 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:18:31.344 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:18:31.344 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:18:31.344 19:25:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3766474 00:18:39.485 Initializing NVMe Controllers 00:18:39.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:18:39.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:18:39.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:18:39.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:18:39.485 Initialization complete. Launching workers. 
00:18:39.485 ========================================================
00:18:39.485 Latency(us)
00:18:39.486 Device Information : IOPS MiB/s Average min max
00:18:39.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7560.00 29.53 8467.47 1265.55 52661.70
00:18:39.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13484.70 52.67 4760.60 994.19 46033.01
00:18:39.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7495.90 29.28 8565.37 1062.67 53752.12
00:18:39.486 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6965.40 27.21 9205.97 859.67 52732.97
00:18:39.486 ========================================================
00:18:39.486 Total : 35506.00 138.70 7225.20 859.67 53752.12
00:18:39.486
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:39.486 rmmod nvme_tcp
00:18:39.486 rmmod nvme_fabrics
00:18:39.486 rmmod nvme_keyring
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:18:39.486 19:25:13
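The ADQ run above passes because only two of the four poll groups stayed idle (perf_adq.sh@108-109 counts poll groups with `current_io_qpairs == 0` and requires the count to stay below a threshold). A jq-free sketch of that check, run against a trimmed, made-up stats snippet rather than real `rpc.py nvmf_get_stats` output (both the JSON and the threshold of 3 here are illustrative stand-ins):

```shell
# Hedged sketch of the idle-poll-group check from perf_adq.sh.
# The real script pipes `rpc_cmd nvmf_get_stats` through jq; this
# stand-in JSON mimics the shape seen in the log, one group per line.
stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 3 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 }
  ]
}'

# Count poll groups that served no I/O queue pairs during the run.
idle=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "idle=$idle"

# ADQ should steer connections onto a subset of groups, but not leave
# most of them unused; fail if too many sat idle (threshold illustrative).
if [ "$idle" -lt 3 ]; then
    echo "ADQ steering OK"
else
    echo "ADQ steering FAILED" >&2
fi
```

In the actual test the same count comes from `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'` piped into `wc -l`, as shown in the trace, and a count of 2 out of 4 groups satisfies the `[[ 2 -lt 2 ]]`-style gate used there.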
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3766147 ']' 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3766147 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3766147 ']' 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3766147 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3766147 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3766147' 00:18:39.486 killing process with pid 3766147 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3766147 00:18:39.486 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3766147 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:18:39.745 
19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.745 19:25:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:18:43.037 00:18:43.037 real 0m51.127s 00:18:43.037 user 2m48.082s 00:18:43.037 sys 0m9.470s 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:43.037 ************************************ 00:18:43.037 END TEST nvmf_perf_adq 00:18:43.037 ************************************ 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:18:43.037 ************************************ 00:18:43.037 START TEST nvmf_shutdown 00:18:43.037 ************************************ 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:18:43.037 * Looking for test storage... 00:18:43.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.037 19:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.037 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.038 --rc genhtml_branch_coverage=1 00:18:43.038 --rc genhtml_function_coverage=1 00:18:43.038 --rc genhtml_legend=1 00:18:43.038 --rc geninfo_all_blocks=1 00:18:43.038 --rc geninfo_unexecuted_blocks=1 00:18:43.038 00:18:43.038 ' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.038 --rc genhtml_branch_coverage=1 00:18:43.038 --rc genhtml_function_coverage=1 00:18:43.038 --rc genhtml_legend=1 00:18:43.038 --rc geninfo_all_blocks=1 00:18:43.038 --rc geninfo_unexecuted_blocks=1 00:18:43.038 00:18:43.038 ' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.038 --rc genhtml_branch_coverage=1 00:18:43.038 --rc genhtml_function_coverage=1 00:18:43.038 --rc genhtml_legend=1 00:18:43.038 --rc geninfo_all_blocks=1 00:18:43.038 --rc geninfo_unexecuted_blocks=1 00:18:43.038 00:18:43.038 ' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.038 --rc genhtml_branch_coverage=1 00:18:43.038 --rc genhtml_function_coverage=1 00:18:43.038 --rc genhtml_legend=1 00:18:43.038 --rc geninfo_all_blocks=1 00:18:43.038 --rc geninfo_unexecuted_blocks=1 00:18:43.038 00:18:43.038 ' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:43.038 ************************************ 00:18:43.038 START TEST nvmf_shutdown_tc1 00:18:43.038 ************************************ 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:43.038 19:25:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:18:48.314 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:48.314 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:48.314 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.314 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:48.314 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.314 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:48.315 Found net devices under 0000:31:00.0: cvl_0_0 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:48.315 Found net devices under 0000:31:00.1: cvl_0_1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:48.315 19:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:48.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:18:48.315 00:18:48.315 --- 10.0.0.2 ping statistics --- 00:18:48.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.315 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:48.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:18:48.315 00:18:48.315 --- 10.0.0.1 ping statistics --- 00:18:48.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.315 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3773324 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3773324 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3773324 ']' 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:48.315 19:25:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:48.315 [2024-11-26 19:25:21.997051] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:18:48.315 [2024-11-26 19:25:21.997109] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:48.315 [2024-11-26 19:25:22.068858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:48.315 [2024-11-26 19:25:22.098838] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:48.315 [2024-11-26 19:25:22.098865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.315 [2024-11-26 19:25:22.098871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.315 [2024-11-26 19:25:22.098875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.315 [2024-11-26 19:25:22.098880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
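The `nvmf_tcp_init` trace above shows how the harness makes one physical host act as both target and initiator: it moves one port of the NIC pair (`cvl_0_0`) into a private network namespace, addresses the two sides on 10.0.0.0/24, opens TCP port 4420 in the firewall, and ping-checks both directions. A hedged stand-alone sketch of that sequence, with the interface and namespace names taken from the log; the commands are echoed rather than executed here, since the real flow needs root and the actual `cvl_*` netdevs:

```shell
# Sketch of the nvmf_tcp_init flow seen in the trace (names from the log).
# run() only prints each command; swap the body to "$@" to execute as root.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                          # private namespace for the target side
run ip link set "$TARGET_IF" netns "$NS"        # move the target port into it
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                          # initiator -> target reachability
run ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator reachability
```

Because the target lives in the namespace, every target-side command in the rest of the log is wrapped in `ip netns exec cvl_0_0_ns_spdk ...` (that is what `NVMF_TARGET_NS_CMD` holds when `nvmf_tgt` is launched a few lines below).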
00:18:48.315 [2024-11-26 19:25:22.100385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.315 [2024-11-26 19:25:22.100541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.315 [2024-11-26 19:25:22.100698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.315 [2024-11-26 19:25:22.100700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 [2024-11-26 19:25:22.806432] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.252 19:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.252 19:25:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.252 Malloc1 00:18:49.252 [2024-11-26 19:25:22.892735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.252 Malloc2 00:18:49.252 Malloc3 00:18:49.252 Malloc4 00:18:49.252 Malloc5 00:18:49.252 Malloc6 00:18:49.252 Malloc7 00:18:49.514 Malloc8 00:18:49.514 Malloc9 
00:18:49.514 Malloc10 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3773655 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3773655 /var/tmp/bdevperf.sock 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3773655 ']' 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 
00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.514 )") 00:18:49.514 [2024-11-26 19:25:23.302936] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:18:49.514 [2024-11-26 19:25:23.302992] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.514 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.514 { 00:18:49.514 "params": { 00:18:49.514 "name": "Nvme$subsystem", 00:18:49.514 "trtype": "$TEST_TRANSPORT", 00:18:49.514 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.514 "adrfam": "ipv4", 00:18:49.514 "trsvcid": "$NVMF_PORT", 00:18:49.514 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.514 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.514 "hdgst": ${hdgst:-false}, 00:18:49.514 "ddgst": ${ddgst:-false} 00:18:49.514 }, 00:18:49.514 "method": "bdev_nvme_attach_controller" 00:18:49.514 } 00:18:49.514 EOF 00:18:49.515 )") 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.515 { 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme$subsystem", 00:18:49.515 "trtype": "$TEST_TRANSPORT", 00:18:49.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "$NVMF_PORT", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.515 "hdgst": ${hdgst:-false}, 
00:18:49.515 "ddgst": ${ddgst:-false} 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 } 00:18:49.515 EOF 00:18:49.515 )") 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:49.515 { 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme$subsystem", 00:18:49.515 "trtype": "$TEST_TRANSPORT", 00:18:49.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "$NVMF_PORT", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:49.515 "hdgst": ${hdgst:-false}, 00:18:49.515 "ddgst": ${ddgst:-false} 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 } 00:18:49.515 EOF 00:18:49.515 )") 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:18:49.515 19:25:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme1", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme2", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme3", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme4", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 
00:18:49.515 "name": "Nvme5", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme6", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme7", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme8", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme9", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 },{ 00:18:49.515 "params": { 00:18:49.515 "name": "Nvme10", 00:18:49.515 "trtype": "tcp", 00:18:49.515 "traddr": "10.0.0.2", 00:18:49.515 "adrfam": "ipv4", 00:18:49.515 "trsvcid": "4420", 00:18:49.515 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:49.515 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:49.515 "hdgst": false, 00:18:49.515 "ddgst": false 00:18:49.515 }, 00:18:49.515 "method": "bdev_nvme_attach_controller" 00:18:49.515 }' 00:18:49.775 [2024-11-26 19:25:23.382378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.775 [2024-11-26 19:25:23.418735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3773655 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:18:51.240 19:25:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:18:52.197 
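The repeated `config+=("$(cat <<-EOF ... EOF)")` blocks traced above are `gen_nvmf_target_json` building one `bdev_nvme_attach_controller` params object per subsystem ID, then joining them (the `IFS=,` / `printf` step) into the JSON that bdevperf reads from `/dev/fd/63`. A minimal stand-alone sketch of that heredoc-accumulation pattern, with the fields simplified; this is an illustration, not the actual helper from `nvmf/common.sh`:

```shell
# Sketch of the gen_nvmf_target_json pattern: one JSON block per subsystem ID,
# accumulated into an array via a heredoc, then comma-joined into a JSON array.
gen_json_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do          # default to subsystem 1 if no args
        config+=("$(cat <<EOF
{"name": "Nvme$subsystem", "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"}
EOF
)")
    done
    local IFS=,                             # "${config[*]}" joins on IFS
    printf '[%s]\n' "${config[*]}"
}

gen_json_sketch 1 2 3
```

Running it with `1 2 3` prints a three-element JSON array, one object per cnode; the real helper emits the full params blocks (trtype, traddr, trsvcid, digests) seen in the `printf '%s\n'` output above, for subsystems 1 through 10.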
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3773655 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3773324 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 
19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 [2024-11-26 19:25:25.747837] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:18:52.197 [2024-11-26 19:25:25.747891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774352 ] 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": "bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.197 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.197 { 00:18:52.197 "params": { 00:18:52.197 "name": "Nvme$subsystem", 00:18:52.197 "trtype": "$TEST_TRANSPORT", 00:18:52.197 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.197 "adrfam": "ipv4", 00:18:52.197 "trsvcid": "$NVMF_PORT", 00:18:52.197 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.197 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.197 "hdgst": ${hdgst:-false}, 00:18:52.197 "ddgst": ${ddgst:-false} 00:18:52.197 }, 00:18:52.197 "method": 
"bdev_nvme_attach_controller" 00:18:52.197 } 00:18:52.197 EOF 00:18:52.197 )") 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:52.198 { 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme$subsystem", 00:18:52.198 "trtype": "$TEST_TRANSPORT", 00:18:52.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "$NVMF_PORT", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:52.198 "hdgst": ${hdgst:-false}, 00:18:52.198 "ddgst": ${ddgst:-false} 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 } 00:18:52.198 EOF 00:18:52.198 )") 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:18:52.198 19:25:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme1", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme2", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme3", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme4", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 
00:18:52.198 "name": "Nvme5", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme6", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme7", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme8", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme9", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 },{ 00:18:52.198 "params": { 00:18:52.198 "name": "Nvme10", 00:18:52.198 "trtype": "tcp", 00:18:52.198 "traddr": "10.0.0.2", 00:18:52.198 "adrfam": "ipv4", 00:18:52.198 "trsvcid": "4420", 00:18:52.198 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:18:52.198 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:18:52.198 "hdgst": false, 00:18:52.198 "ddgst": false 00:18:52.198 }, 00:18:52.198 "method": "bdev_nvme_attach_controller" 00:18:52.198 }' 00:18:52.198 [2024-11-26 19:25:25.826421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.198 [2024-11-26 19:25:25.863702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.579 Running I/O for 1 seconds... 00:18:54.517 2130.00 IOPS, 133.12 MiB/s 00:18:54.517 Latency(us) 00:18:54.517 [2024-11-26T18:25:28.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.517 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme1n1 : 1.08 239.90 14.99 0.00 0.00 259488.95 18677.76 232434.35 00:18:54.517 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme2n1 : 1.09 235.55 14.72 0.00 0.00 264282.03 16384.00 248162.99 00:18:54.517 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme3n1 : 1.15 278.07 17.38 0.00 0.00 220171.43 20862.29 246415.36 00:18:54.517 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme4n1 : 1.13 283.92 17.75 0.00 0.00 210968.41 9448.11 281367.89 00:18:54.517 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme5n1 : 1.15 281.16 17.57 0.00 0.00 209340.79 2143.57 237677.23 00:18:54.517 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.517 Nvme6n1 : 1.17 273.37 17.09 0.00 0.00 212225.71 16056.32 262144.00 00:18:54.517 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.517 Verification LBA range: start 0x0 length 0x400 00:18:54.518 Nvme7n1 : 1.17 274.93 17.18 0.00 0.00 199033.21 15619.41 249910.61 00:18:54.518 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.518 Verification LBA range: start 0x0 length 0x400 00:18:54.518 Nvme8n1 : 1.16 275.84 17.24 0.00 0.00 203265.88 15947.09 221074.77 00:18:54.518 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.518 Verification LBA range: start 0x0 length 0x400 00:18:54.518 Nvme9n1 : 1.18 271.51 16.97 0.00 0.00 203242.15 14964.05 248162.99 00:18:54.518 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:54.518 Verification LBA range: start 0x0 length 0x400 00:18:54.518 Nvme10n1 : 1.18 284.73 17.80 0.00 0.00 189374.27 3126.61 270882.13 00:18:54.518 [2024-11-26T18:25:28.383Z] =================================================================================================================== 00:18:54.518 [2024-11-26T18:25:28.383Z] Total : 2698.99 168.69 0.00 0.00 215181.92 2143.57 281367.89 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:54.777 rmmod nvme_tcp 00:18:54.777 rmmod nvme_fabrics 00:18:54.777 rmmod nvme_keyring 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3773324 ']' 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3773324 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3773324 ']' 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3773324 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3773324 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3773324' 00:18:54.777 killing process with pid 3773324 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3773324 00:18:54.777 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3773324 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:55.037 19:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:55.037 19:25:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.940 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:56.940 00:18:56.940 real 0m14.076s 00:18:56.940 user 0m31.670s 00:18:56.940 sys 0m4.864s 00:18:56.940 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.940 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:18:56.940 ************************************ 00:18:56.940 END TEST nvmf_shutdown_tc1 00:18:56.940 ************************************ 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 ************************************ 00:18:57.200 
START TEST nvmf_shutdown_tc2 00:18:57.200 ************************************ 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:18:57.200 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:18:57.200 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:57.200 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:57.200 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:57.200 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:57.200 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.200 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.201 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:57.201 Found net devices under 0000:31:00.0: cvl_0_0 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:57.201 Found net devices under 0000:31:00.1: cvl_0_1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:57.201 19:25:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:57.201 19:25:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:57.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:57.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:18:57.201 00:18:57.201 --- 10.0.0.2 ping statistics --- 00:18:57.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.201 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:57.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:57.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:18:57.201 00:18:57.201 --- 10.0.0.1 ping statistics --- 00:18:57.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.201 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:57.201 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:57.461 19:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3775540 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3775540 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3775540 ']' 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.461 [2024-11-26 19:25:31.127609] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:18:57.461 [2024-11-26 19:25:31.127658] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.461 [2024-11-26 19:25:31.202957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:57.461 [2024-11-26 19:25:31.232816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.461 [2024-11-26 19:25:31.232844] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.461 [2024-11-26 19:25:31.232850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.461 [2024-11-26 19:25:31.232855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.461 [2024-11-26 19:25:31.232859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:57.461 [2024-11-26 19:25:31.234170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.461 [2024-11-26 19:25:31.234183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:57.461 [2024-11-26 19:25:31.234313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.461 [2024-11-26 19:25:31.234315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.461 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.721 [2024-11-26 19:25:31.338600] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.721 19:25:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.721 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.721 Malloc1 00:18:57.721 [2024-11-26 19:25:31.428770] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:57.721 Malloc2 00:18:57.721 Malloc3 00:18:57.721 Malloc4 00:18:57.721 Malloc5 00:18:57.981 Malloc6 00:18:57.981 Malloc7 00:18:57.981 Malloc8 00:18:57.981 Malloc9 
00:18:57.981 Malloc10 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3775840 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3775840 /var/tmp/bdevperf.sock 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3775840 ']' 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:57.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.981 { 00:18:57.981 "params": { 00:18:57.981 "name": "Nvme$subsystem", 00:18:57.981 "trtype": "$TEST_TRANSPORT", 00:18:57.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.981 "adrfam": "ipv4", 00:18:57.981 "trsvcid": "$NVMF_PORT", 00:18:57.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.981 "hdgst": ${hdgst:-false}, 00:18:57.981 "ddgst": ${ddgst:-false} 00:18:57.981 }, 00:18:57.981 "method": "bdev_nvme_attach_controller" 00:18:57.981 } 00:18:57.981 EOF 00:18:57.981 )") 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:18:57.981 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.981 { 00:18:57.981 "params": { 00:18:57.981 "name": "Nvme$subsystem", 00:18:57.981 "trtype": "$TEST_TRANSPORT", 00:18:57.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.981 "adrfam": "ipv4", 00:18:57.981 "trsvcid": "$NVMF_PORT", 00:18:57.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.981 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 
00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:57.982 [2024-11-26 19:25:31.843731] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:18:57.982 [2024-11-26 19:25:31.843782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3775840 ] 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:57.982 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:57.982 { 00:18:57.982 "params": { 00:18:57.982 "name": "Nvme$subsystem", 00:18:57.982 "trtype": "$TEST_TRANSPORT", 00:18:57.982 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:57.982 "adrfam": "ipv4", 00:18:57.982 "trsvcid": "$NVMF_PORT", 00:18:57.982 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:57.982 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:57.982 "hdgst": ${hdgst:-false}, 00:18:57.982 "ddgst": ${ddgst:-false} 00:18:57.982 }, 00:18:57.982 "method": "bdev_nvme_attach_controller" 00:18:57.982 } 00:18:57.982 EOF 00:18:57.982 )") 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.242 { 00:18:58.242 "params": { 00:18:58.242 "name": "Nvme$subsystem", 00:18:58.242 "trtype": "$TEST_TRANSPORT", 00:18:58.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.242 "adrfam": "ipv4", 00:18:58.242 "trsvcid": "$NVMF_PORT", 00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.242 "hdgst": ${hdgst:-false}, 00:18:58.242 "ddgst": ${ddgst:-false} 00:18:58.242 }, 00:18:58.242 "method": 
"bdev_nvme_attach_controller" 00:18:58.242 } 00:18:58.242 EOF 00:18:58.242 )") 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:58.242 { 00:18:58.242 "params": { 00:18:58.242 "name": "Nvme$subsystem", 00:18:58.242 "trtype": "$TEST_TRANSPORT", 00:18:58.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:58.242 "adrfam": "ipv4", 00:18:58.242 "trsvcid": "$NVMF_PORT", 00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:58.242 "hdgst": ${hdgst:-false}, 00:18:58.242 "ddgst": ${ddgst:-false} 00:18:58.242 }, 00:18:58.242 "method": "bdev_nvme_attach_controller" 00:18:58.242 } 00:18:58.242 EOF 00:18:58.242 )") 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:18:58.242 19:25:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme1",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme2",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme3",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme4",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme5",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme6",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.242 },
00:18:58.242 "method": "bdev_nvme_attach_controller"
00:18:58.242 },{
00:18:58.242 "params": {
00:18:58.242 "name": "Nvme7",
00:18:58.242 "trtype": "tcp",
00:18:58.242 "traddr": "10.0.0.2",
00:18:58.242 "adrfam": "ipv4",
00:18:58.242 "trsvcid": "4420",
00:18:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:18:58.242 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:18:58.242 "hdgst": false,
00:18:58.242 "ddgst": false
00:18:58.243 },
00:18:58.243 "method": "bdev_nvme_attach_controller"
00:18:58.243 },{
00:18:58.243 "params": {
00:18:58.243 "name": "Nvme8",
00:18:58.243 "trtype": "tcp",
00:18:58.243 "traddr": "10.0.0.2",
00:18:58.243 "adrfam": "ipv4",
00:18:58.243 "trsvcid": "4420",
00:18:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:18:58.243 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:18:58.243 "hdgst": false,
00:18:58.243 "ddgst": false
00:18:58.243 },
00:18:58.243 "method": "bdev_nvme_attach_controller"
00:18:58.243 },{
00:18:58.243 "params": {
00:18:58.243 "name": "Nvme9",
00:18:58.243 "trtype": "tcp",
00:18:58.243 "traddr": "10.0.0.2",
00:18:58.243 "adrfam": "ipv4",
00:18:58.243 "trsvcid": "4420",
00:18:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:18:58.243 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:18:58.243 "hdgst": false,
00:18:58.243 "ddgst": false
00:18:58.243 },
00:18:58.243 "method": "bdev_nvme_attach_controller"
00:18:58.243 },{
00:18:58.243 "params": {
00:18:58.243 "name": "Nvme10",
00:18:58.243 "trtype": "tcp",
00:18:58.243 "traddr": "10.0.0.2",
00:18:58.243 "adrfam": "ipv4",
00:18:58.243 "trsvcid": "4420",
00:18:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:18:58.243 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:18:58.243 "hdgst": false,
00:18:58.243 "ddgst": false
00:18:58.243 },
00:18:58.243 "method": "bdev_nvme_attach_controller"
00:18:58.243 }'
00:18:58.243 [2024-11-26 19:25:31.909078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:58.243 [2024-11-26 19:25:31.939154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:59.624 Running I/O for 10 seconds...
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:18:59.885 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:18:59.885 19:25:33
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:18:59.886 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:00.144 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:19:00.144 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:00.144 19:25:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:00.144 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.144 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:00.144 19:25:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:00.144 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.402 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:19:00.402 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:19:00.402 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:19:00.402 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3775840 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3775840 ']' 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3775840 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.403 19:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775840
00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775840'
killing process with pid 3775840
19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3775840
19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3775840
00:19:00.403 Received shutdown signal, test time was about 0.732621 seconds
00:19:00.403
00:19:00.403 Latency(us)
00:19:00.403 [2024-11-26T18:25:34.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:00.403 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme1n1 : 0.72 266.04 16.63 0.00 0.00 237924.69 22282.24 228939.09
00:19:00.403 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme2n1 : 0.71 282.10 17.63 0.00 0.00 216782.19 3877.55 227191.47
00:19:00.403 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme3n1 : 0.71 269.15 16.82 0.00 0.00 224651.09 21517.65 212336.64
00:19:00.403 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme4n1 : 0.71 269.80 16.86 0.00 0.00 221453.37 19114.67 232434.35
00:19:00.403 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme5n1 : 0.73 264.25 16.52 0.00 0.00 221586.20 18240.85 235929.60
00:19:00.403 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme6n1 : 0.71 271.33 16.96 0.00 0.00 209994.81 19223.89 232434.35
00:19:00.403 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme7n1 : 0.73 351.27 21.95 0.00 0.00 160353.92 20425.39 210589.01
00:19:00.403 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme8n1 : 0.72 267.76 16.73 0.00 0.00 205829.69 18350.08 230686.72
00:19:00.403 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme9n1 : 0.73 262.29 16.39 0.00 0.00 206475.38 16820.91 253405.87
00:19:00.403 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:00.403 Verification LBA range: start 0x0 length 0x400
00:19:00.403 Nvme10n1 : 0.72 264.83 16.55 0.00 0.00 199830.76 19770.03 232434.35
00:19:00.403 [2024-11-26T18:25:34.268Z] ===================================================================================================================
00:19:00.403 [2024-11-26T18:25:34.268Z] Total : 2768.82 173.05 0.00 0.00 208906.70 3877.55 253405.87
00:19:00.403 19:25:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3775540
00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- #
stoptarget 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.779 rmmod nvme_tcp 00:19:01.779 rmmod nvme_fabrics 00:19:01.779 rmmod nvme_keyring 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3775540 ']' 
00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3775540 ']' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3775540' 00:19:01.779 killing process with pid 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3775540 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.779 19:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.779 19:25:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:04.317 00:19:04.317 real 0m6.816s 00:19:04.317 user 0m19.808s 00:19:04.317 sys 0m0.932s 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:04.317 ************************************ 00:19:04.317 END TEST nvmf_shutdown_tc2 00:19:04.317 ************************************ 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:04.317 19:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:04.317 ************************************ 00:19:04.317 START TEST nvmf_shutdown_tc3 00:19:04.317 ************************************ 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:04.317 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:04.318 19:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:04.318 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.318 19:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:04.318 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:04.318 Found net devices under 0000:31:00.0: cvl_0_0 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:04.318 Found net devices under 0000:31:00.1: cvl_0_1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.318 
19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.318 19:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:04.318 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2
00:19:04.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:04.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms
00:19:04.319
00:19:04.319 --- 10.0.0.2 ping statistics ---
00:19:04.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:04.319 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:04.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:04.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms
00:19:04.319
00:19:04.319 --- 10.0.0.1 ping statistics ---
00:19:04.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:04.319 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- #
modprobe nvme-tcp 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3777297 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3777297 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3777297 ']' 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.319 19:25:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:04.319 [2024-11-26 19:25:37.998446] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:19:04.319 [2024-11-26 19:25:37.998494] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.319 [2024-11-26 19:25:38.069955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.319 [2024-11-26 19:25:38.099634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.319 [2024-11-26 19:25:38.099660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.319 [2024-11-26 19:25:38.099665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.319 [2024-11-26 19:25:38.099670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.319 [2024-11-26 19:25:38.099674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
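`nvmfappstart` launches `nvmf_tgt` inside the namespace and then blocks in `waitforlisten` (note `max_retries=100` in the trace) until the app accepts connections on `/var/tmp/spdk.sock`. The polling idea can be sketched in Python, assuming only a UNIX-domain socket (this is a sketch of the behavior, not the actual `autotest_common.sh` helper):

```python
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll a UNIX-domain socket until a server accepts connections there,
    giving up after max_retries attempts. Mirrors the intent of
    waitforlisten in autotest_common.sh (a sketch, not the real helper)."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)   # succeeds once the RPC server is listening
            return True
        except OSError:
            time.sleep(delay)      # not up yet; retry
        finally:
            s.close()
    return False
```

In the log the wait succeeds between 19:25:37 and 19:25:38, after which `timing_exit start_nvmf_tgt` fires.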
00:19:04.319 [2024-11-26 19:25:38.100878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.319 [2024-11-26 19:25:38.101033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.319 [2024-11-26 19:25:38.101164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.319 [2024-11-26 19:25:38.101166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 [2024-11-26 19:25:38.802716] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.257 19:25:38 
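The four `reactor_run` notices (cores 1, 2, 3, 4) follow directly from the `-m 0x1E` core mask passed to `nvmf_tgt`: each set bit in the mask is a core that gets a reactor. A minimal sketch of that expansion:

```python
def cores_from_mask(mask):
    """Expand an SPDK -m hex core mask into the list of set-bit cores.
    0x1E = 0b11110 selects cores 1..4, matching the four reactor_run
    notices in the log (and 'Total cores available: 4')."""
    m = int(mask, 16)
    return [bit for bit in range(m.bit_length()) if (m >> bit) & 1]
```

Core 0 is deliberately left out of the target's mask here; the bdevperf initiator started later runs with `-c 0x1` on core 0, so the two applications do not contend for cores.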
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.257 19:25:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.257 Malloc1 00:19:05.257 [2024-11-26 19:25:38.896786] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.257 Malloc2 00:19:05.257 Malloc3 00:19:05.257 Malloc4 00:19:05.257 Malloc5 00:19:05.257 Malloc6 00:19:05.257 Malloc7 00:19:05.516 Malloc8 00:19:05.516 Malloc9 
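The repeated `for i in "${num_subsystems[@]}" / cat` pairs batch per-subsystem RPC lines into `rpcs.txt`, which is then replayed in one `rpc_cmd` call (`shutdown.sh@36`), producing the Malloc1..Malloc10 bdevs seen above. The shape of that batch can be sketched as follows; the exact bdev sizes and flags are assumptions, since `shutdown.sh` does not print them in this trace:

```python
def gen_subsystem_rpcs(num=10, ip="10.0.0.2", port=4420):
    """Emit per-subsystem RPC lines in the style shutdown.sh batches into
    rpcs.txt: one Malloc bdev, subsystem, namespace, and listener per index.
    A sketch of the pattern only; sizes/flags here are illustrative."""
    lines = []
    for i in range(1, num + 1):
        nqn = f"nqn.2016-06.io.spdk:cnode{i}"
        lines += [
            f"bdev_malloc_create -b Malloc{i} 64 512",            # 64 MiB, 512 B blocks (assumed)
            f"nvmf_create_subsystem {nqn} -a",                    # allow any host (assumed)
            f"nvmf_subsystem_add_ns {nqn} Malloc{i}",
            f"nvmf_subsystem_add_listener {nqn} -t tcp -a {ip} -s {port}",
        ]
    return lines
```

Replaying all ten subsystems in a single RPC invocation keeps setup fast and matches the single `tcp.c:1081` "Target Listening on 10.0.0.2 port 4420" notice.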
00:19:05.516 Malloc10 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3777678 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3777678 /var/tmp/bdevperf.sock 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3777678 ']' 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.516 { 00:19:05.516 "params": { 00:19:05.516 "name": "Nvme$subsystem", 00:19:05.516 "trtype": "$TEST_TRANSPORT", 00:19:05.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.516 "adrfam": "ipv4", 00:19:05.516 "trsvcid": "$NVMF_PORT", 00:19:05.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.516 "hdgst": ${hdgst:-false}, 00:19:05.516 "ddgst": ${ddgst:-false} 00:19:05.516 }, 00:19:05.516 "method": "bdev_nvme_attach_controller" 00:19:05.516 } 00:19:05.516 EOF 00:19:05.516 )") 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 
00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.516 { 00:19:05.516 "params": { 00:19:05.516 "name": "Nvme$subsystem", 00:19:05.516 "trtype": "$TEST_TRANSPORT", 00:19:05.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.516 "adrfam": "ipv4", 00:19:05.516 "trsvcid": "$NVMF_PORT", 00:19:05.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.516 "hdgst": ${hdgst:-false}, 00:19:05.516 "ddgst": ${ddgst:-false} 00:19:05.516 }, 00:19:05.516 "method": "bdev_nvme_attach_controller" 00:19:05.516 } 00:19:05.516 EOF 00:19:05.516 )") 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.516 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.516 { 00:19:05.516 "params": { 00:19:05.516 "name": "Nvme$subsystem", 00:19:05.516 "trtype": "$TEST_TRANSPORT", 00:19:05.516 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.516 "adrfam": "ipv4", 00:19:05.516 "trsvcid": "$NVMF_PORT", 00:19:05.516 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.516 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.516 "hdgst": ${hdgst:-false}, 00:19:05.516 "ddgst": ${ddgst:-false} 00:19:05.516 }, 00:19:05.516 "method": "bdev_nvme_attach_controller" 00:19:05.516 } 00:19:05.516 EOF 00:19:05.516 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 
00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 [2024-11-26 19:25:39.310809] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:19:05.517 [2024-11-26 19:25:39.310865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3777678 ] 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": 
"bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:05.517 { 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme$subsystem", 00:19:05.517 "trtype": "$TEST_TRANSPORT", 00:19:05.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "$NVMF_PORT", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:05.517 "hdgst": ${hdgst:-false}, 00:19:05.517 "ddgst": ${ddgst:-false} 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 } 00:19:05.517 EOF 00:19:05.517 )") 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:19:05.517 19:25:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme1", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme2", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme3", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme4", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 
00:19:05.517 "name": "Nvme5", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme6", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme7", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme8", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme9", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 },{ 00:19:05.517 "params": { 00:19:05.517 "name": "Nvme10", 00:19:05.517 "trtype": "tcp", 00:19:05.517 "traddr": "10.0.0.2", 00:19:05.517 "adrfam": "ipv4", 00:19:05.517 "trsvcid": "4420", 00:19:05.517 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:05.517 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:05.517 "hdgst": false, 00:19:05.517 "ddgst": false 00:19:05.517 }, 00:19:05.517 "method": "bdev_nvme_attach_controller" 00:19:05.517 }' 00:19:05.517 [2024-11-26 19:25:39.376364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.776 [2024-11-26 19:25:39.406866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.155 Running I/O for 10 seconds... 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:19:07.415 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=16 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 16 -ge 100 ']' 00:19:07.416 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3777297 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3777297 ']' 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3777297 00:19:07.684 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:19:07.685 19:25:41 
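The `waitforio` loop traced above (shutdown.sh@58-70) polls `bdev_get_iostat -b Nvme1n1` through `jq -r '.bdevs[0].num_read_ops'`, sleeping 0.25 s between polls, and breaks once at least 100 reads have completed; in the log the first poll sees 16 ops, the second 131, so `ret=0`. The control flow can be sketched as (with `get_read_ops` standing in for the RPC + jq pipeline):

```python
def waitforio(get_read_ops, threshold=100, retries=10, delay=0.25,
              sleep=lambda s: None):
    """Mirror the waitforio loop in target/shutdown.sh: poll num_read_ops
    until it reaches threshold, decrementing a retry counter from 10.
    get_read_ops stands in for `rpc_cmd bdev_get_iostat | jq`; sleep is
    injectable so the sketch is testable without waiting."""
    for _ in range(retries):
        if get_read_ops() >= threshold:
            return True   # trace: '[' 131 -ge 100 ']' -> ret=0, break
        sleep(delay)      # trace: sleep 0.25 between polls
    return False
```

Proving I/O is actually flowing before killing the target (pid 3777297, the `killprocess` that follows) is the point of tc3: the shutdown is exercised mid-workload, not on an idle target.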
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3777297 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3777297' 00:19:07.685 killing process with pid 3777297 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3777297 00:19:07.685 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3777297 00:19:07.685
[2024-11-26 19:25:41.516785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550050 is same with the state(6) to be set 00:19:07.685
[same message repeated for tqpair=0x1550050 through 19:25:41.517134]
[2024-11-26 19:25:41.518126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x141d380 is same with the state(6) to be set 00:19:07.685
[same message repeated for tqpair=0x141d380 through 19:25:41.518450]
[2024-11-26 19:25:41.519409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550540 is same with the state(6) to be set 00:19:07.686
[same message repeated for tqpair=0x1550540 through 19:25:41.519721]
[2024-11-26 19:25:41.520748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687
[same message repeated for tqpair=0x1550a10 through 19:25:41.520985]
[2024-11-26 19:25:41.520990]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.520995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.687 [2024-11-26 19:25:41.521037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.521041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550a10 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522268] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522321] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522333] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522388] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522429] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522442] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522498] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522552] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.522561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551750 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523404] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523409] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.688 [2024-11-26 19:25:41.523456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523465] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523521] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523578] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523634] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.523658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1551ad0 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524386] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524391] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524405] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524459] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1552470 is same with the state(6) to be set 00:19:07.689 [2024-11-26 19:25:41.524464] (same tcp.c:1773 error for tqpair=0x1552470 repeated for timestamps 19:25:41.524464 through 19:25:41.524662; repeats condensed)
[2024-11-26 19:25:41.525138 through 19:25:41.525796] For each of tqpairs 0x11f5bb0, 0x11a2a40, 0x11dfed0, 0xc93610, 0x11e7720, 0xd75490, 0xd67b10, 0xd76080, 0xd74430 and 0xd64f40: nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, then nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair is same with the state(6) to be set (repeating pattern condensed)
[2024-11-26 19:25:41.526449 through 19:25:41.527109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0-52 nsid:1 lba:24576-31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeating pattern condensed)
[2024-11-26 19:25:41.527115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.692 [2024-11-26 19:25:41.527121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.692 [2024-11-26 19:25:41.527127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.692 [2024-11-26 19:25:41.527132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.692 [2024-11-26 19:25:41.527139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.692 [2024-11-26 19:25:41.527144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.692 [2024-11-26 19:25:41.527150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.692 [2024-11-26 19:25:41.527155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1183a30 is same with the state(6) to be set 00:19:07.693 [2024-11-26 19:25:41.527431] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527646] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527711] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.693 [2024-11-26 19:25:41.527753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.693 [2024-11-26 19:25:41.527758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 
19:25:41.527846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.527991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.527998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.694 [2024-11-26 19:25:41.528045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528114] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.694 [2024-11-26 19:25:41.528313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.694 [2024-11-26 19:25:41.528319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 
[2024-11-26 19:25:41.528411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528677] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.528722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.528728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532560] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.695 [2024-11-26 19:25:41.532624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.695 [2024-11-26 19:25:41.532633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 
19:25:41.532746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.532965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.532972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:07.696 [2024-11-26 19:25:41.533110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.696 [2024-11-26 19:25:41.533335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.696 [2024-11-26 19:25:41.533342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:07.697 [2024-11-26 19:25:41.533354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533419] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 
19:25:41.533617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533683] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.697 [2024-11-26 19:25:41.533799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.697 [2024-11-26 19:25:41.533805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 
[2024-11-26 19:25:41.533816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-11-26 19:25:41.533828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-11-26 19:25:41.533840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-11-26 19:25:41.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-11-26 19:25:41.533863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.533869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.698 [2024-11-26 19:25:41.533875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.698 [2024-11-26 19:25:41.537591] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:19:07.698 [2024-11-26 19:25:41.537615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:07.698 [2024-11-26 19:25:41.537624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:07.698 [2024-11-26 19:25:41.537631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:07.698 [2024-11-26 19:25:41.537643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93610 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a2a40 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76080 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd64f40 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5bb0 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dfed0 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e7720 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75490 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd67b10 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.537745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd74430 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.538128] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.538173] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.538232] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.538262] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.538836] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.538867] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:19:07.698 [2024-11-26 19:25:41.539138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.698 [2024-11-26 19:25:41.539161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd64f40 with addr=10.0.0.2, port=4420
00:19:07.698 [2024-11-26 19:25:41.539168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64f40 is same with the state(6) to be set
00:19:07.698 [2024-11-26 19:25:41.539624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.698 [2024-11-26 19:25:41.539631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76080 with addr=10.0.0.2, port=4420
00:19:07.698 [2024-11-26 19:25:41.539637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76080 is same with the state(6) to be set
00:19:07.698 [2024-11-26 19:25:41.539949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.698 [2024-11-26 19:25:41.539956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a2a40 with addr=10.0.0.2, port=4420
00:19:07.698 [2024-11-26 19:25:41.539961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a2a40 is same with the state(6) to be set
00:19:07.698 [2024-11-26 19:25:41.540420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.698 [2024-11-26 19:25:41.540450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93610 with addr=10.0.0.2, port=4420
00:19:07.698 [2024-11-26 19:25:41.540459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93610 is same with the state(6) to be set
00:19:07.698 [2024-11-26 19:25:41.540561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd64f40 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.540573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76080 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.540580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a2a40 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.540587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93610 (9): Bad file descriptor
00:19:07.698 [2024-11-26 19:25:41.540624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:19:07.698 [2024-11-26 19:25:41.540630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:19:07.698 [2024-11-26 19:25:41.540638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:19:07.698 [2024-11-26 19:25:41.540645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:19:07.698 [2024-11-26 19:25:41.540651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:19:07.698 [2024-11-26 19:25:41.540656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:19:07.965 [2024-11-26 19:25:41.540661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:19:07.965 [2024-11-26 19:25:41.540667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:19:07.965 [2024-11-26 19:25:41.540676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:19:07.965 [2024-11-26 19:25:41.540682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:19:07.965 [2024-11-26 19:25:41.540686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:19:07.965 [2024-11-26 19:25:41.540691] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:19:07.965 [2024-11-26 19:25:41.540696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:19:07.966 [2024-11-26 19:25:41.540701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:19:07.966 [2024-11-26 19:25:41.540706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:07.966 [2024-11-26 19:25:41.540710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:19:07.966 [2024-11-26 19:25:41.547712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547792] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.966 [2024-11-26 19:25:41.547932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.547990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.547996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548001] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.966 [2024-11-26 19:25:41.548176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.966 [2024-11-26 19:25:41.548181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.967 [2024-11-26 19:25:41.548188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.967 [2024-11-26 19:25:41.548193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.967 [2024-11-26 19:25:41.548200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.967 [2024-11-26 
19:25:41.548205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:07.967 [2024-11-26 19:25:41.548212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.967 [2024-11-26 19:25:41.548217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:41-63 (lba 21632-24448, step 128) ...]
00:19:07.967 [2024-11-26 19:25:41.548496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf81c40 is same with the state(6) to be set
00:19:07.967 [2024-11-26 19:25:41.549398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.967 [2024-11-26 19:25:41.549410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-63 (lba 16512-24448, step 128) ...]
00:19:07.969 [2024-11-26 19:25:41.550175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf82c00 is same with the state(6) to be set
00:19:07.969 [2024-11-26 19:25:41.551058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.969 [2024-11-26 19:25:41.551067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-28 (lba 16512-19968, step 128) ...]
00:19:07.970 [2024-11-26 19:25:41.551416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 
19:25:41.551550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551615] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 
[2024-11-26 19:25:41.551749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.970 [2024-11-26 19:25:41.551779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.970 [2024-11-26 19:25:41.551784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.551790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.551796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.551802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.551807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.551814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.551819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.551825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11814b0 is same with the state(6) to be set 00:19:07.971 [2024-11-26 19:25:41.552716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.971 [2024-11-26 19:25:41.552844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.552994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 
19:25:41.553115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.971 [2024-11-26 19:25:41.553150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.971 [2024-11-26 19:25:41.553157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553180] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 
[2024-11-26 19:25:41.553314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.553479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.553485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1182770 is same with the state(6) to be set 00:19:07.972 [2024-11-26 19:25:41.554370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.972 [2024-11-26 19:25:41.554403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.972 [2024-11-26 19:25:41.554510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.972 [2024-11-26 19:25:41.554515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.973 [2024-11-26 19:25:41.554605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 
19:25:41.554868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554934] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.973 [2024-11-26 19:25:41.554974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.973 [2024-11-26 19:25:41.554980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.554985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.554993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.554998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 
[2024-11-26 19:25:41.555067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.555128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.555133] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1184cf0 is same with the state(6) to be set 00:19:07.974 [2024-11-26 19:25:41.556015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:07.974 [2024-11-26 19:25:41.556177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556254] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.974 [2024-11-26 19:25:41.556319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.974 [2024-11-26 19:25:41.556325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 
19:25:41.556465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556531] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 
[2024-11-26 19:25:41.556670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.975 [2024-11-26 19:25:41.556787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:07.975 [2024-11-26 19:25:41.556794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:07.976 [2024-11-26 19:25:41.556799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:07.976 [2024-11-26 19:25:41.556805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.976 [2024-11-26 19:25:41.556811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:07.976 [2024-11-26 19:25:41.556818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:07.976 [2024-11-26 19:25:41.556823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:07.976 [2024-11-26 19:25:41.556829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe720 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.558179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.558201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.558209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.558217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.558287] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.558298] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.558355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:19:07.976 task offset: 24576 on job bdev=Nvme8n1 fails
00:19:07.976
00:19:07.976 Latency(us)
00:19:07.976 [2024-11-26T18:25:41.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:07.976 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme1n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme1n1 : 0.67 192.41 12.03 96.20 0.00 219312.92 15837.87 188743.68
00:19:07.976 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme2n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme2n1 : 0.67 191.93 12.00 95.96 0.00 215540.91 16165.55 180879.36
00:19:07.976 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme3n1 ended in about 0.65 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme3n1 : 0.65 294.59 18.41 98.20 0.00 154534.61 9229.65 173888.85
00:19:07.976 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme4n1 ended in about 0.65 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme4n1 : 0.65 294.22 18.39 98.07 0.00 151496.85 9830.40 173888.85
00:19:07.976 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme5n1 ended in about 0.65 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme5n1 : 0.65 293.85 18.37 97.95 0.00 148386.13 10431.15 161655.47
00:19:07.976 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme6n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme6n1 : 0.67 191.46 11.97 95.73 0.00 198675.34 16274.77 199229.44
00:19:07.976 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme7n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme7n1 : 0.67 286.48 17.90 95.49 0.00 146050.13 10103.47 178257.92
00:19:07.976 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme8n1 ended in about 0.65 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme8n1 : 0.65 295.04 18.44 98.35 0.00 137935.89 9175.04 177384.11
00:19:07.976 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme9n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme9n1 : 0.67 190.52 11.91 95.26 0.00 186639.36 14964.05 183500.80
00:19:07.976 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:07.976 Job: Nvme10n1 ended in about 0.67 seconds with error
00:19:07.976 Verification LBA range: start 0x0 length 0x400
00:19:07.976 Nvme10n1 : 0.67 190.04 11.88 95.02 0.00 182801.64 13653.33 183500.80
00:19:07.976 [2024-11-26T18:25:41.841Z] ===================================================================================================================
00:19:07.976 [2024-11-26T18:25:41.841Z] Total : 2420.53 151.28 966.24 0.00 170357.86 9175.04 199229.44
00:19:07.976 [2024-11-26 19:25:41.576981] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:07.976 [2024-11-26 19:25:41.577020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.577450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.577466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0xd67b10 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.577475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd67b10 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.577821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.577829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd74430 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.577834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd74430 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.578175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.578183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd75490 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.578188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd75490 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.578573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.578580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11dfed0 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.578585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11dfed0 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.579717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.579728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.579735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.579742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:19:07.976 [2024-11-26 19:25:41.579996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.580007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f5bb0 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.580013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f5bb0 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.580331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.580338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11e7720 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.580344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e7720 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.580358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd67b10 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.580368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd74430 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.580374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd75490 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.580381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11dfed0 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.580408] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.580417] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.580425] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.580432] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:19:07.976 [2024-11-26 19:25:41.580810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.580819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc93610 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.580824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc93610 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.581118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.581126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a2a40 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.581131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a2a40 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.581445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:07.976 [2024-11-26 19:25:41.581451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd76080 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.581456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd76080 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.581668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed,
errno = 111
00:19:07.976 [2024-11-26 19:25:41.581675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd64f40 with addr=10.0.0.2, port=4420 [2024-11-26 19:25:41.581680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64f40 is same with the state(6) to be set
00:19:07.976 [2024-11-26 19:25:41.581687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f5bb0 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.581693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11e7720 (9): Bad file descriptor
00:19:07.976 [2024-11-26 19:25:41.581700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc93610 (9): Bad file descriptor
00:19:07.977 [2024-11-26 19:25:41.581847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a2a40 (9): Bad file descriptor
00:19:07.977 [2024-11-26 19:25:41.581854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd76080 (9): Bad file descriptor
00:19:07.977 [2024-11-26 19:25:41.581860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd64f40 (9): Bad file descriptor
00:19:07.977 [2024-11-26 19:25:41.581866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:19:07.977 [2024-11-26 19:25:41.581961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:19:07.977 [2024-11-26 19:25:41.581965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:19:07.977 [2024-11-26 19:25:41.581970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:19:07.977 [2024-11-26 19:25:41.581975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:19:07.977 [2024-11-26 19:25:41.581979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:19:07.977 [2024-11-26 19:25:41.581984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:19:07.977 [2024-11-26 19:25:41.581988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:19:07.977 19:25:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3777678 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3777678 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3777678 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:19:08.918 19:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:08.918 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.179 rmmod nvme_tcp 00:19:09.179 rmmod nvme_fabrics 00:19:09.179 rmmod nvme_keyring 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3777297 ']' 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3777297 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3777297 ']' 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3777297 00:19:09.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3777297) - No such process 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3777297 is not found' 00:19:09.179 Process with pid 3777297 is not found 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:19:09.179 
19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.179 19:25:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:11.083 00:19:11.083 real 0m7.189s 00:19:11.083 user 0m17.049s 00:19:11.083 sys 0m0.958s 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:11.083 ************************************ 00:19:11.083 END TEST nvmf_shutdown_tc3 00:19:11.083 ************************************ 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:19:11.083 19:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:11.083 ************************************ 00:19:11.083 START TEST nvmf_shutdown_tc4 00:19:11.083 ************************************ 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:11.083 19:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:11.083 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:19:11.084 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:11.343 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.343 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.343 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.343 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 
-- # local -ga e810 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:11.344 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.344 
19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:11.344 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:11.344 Found net devices under 0000:31:00.0: cvl_0_0 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:11.344 Found net devices under 0000:31:00.1: cvl_0_1 
00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.344 19:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.344 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.345 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.345 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.345 19:25:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT' 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:19:11.345 00:19:11.345 --- 10.0.0.2 ping statistics --- 00:19:11.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.345 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:19:11.345 00:19:11.345 --- 10.0.0.1 ping statistics --- 00:19:11.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.345 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.345 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3779134 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3779134 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3779134 ']' 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:11.604 19:25:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:11.604 [2024-11-26 19:25:45.272141] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:19:11.604 [2024-11-26 19:25:45.272188] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.604 [2024-11-26 19:25:45.343824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.604 [2024-11-26 19:25:45.373709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.604 [2024-11-26 19:25:45.373737] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.604 [2024-11-26 19:25:45.373743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.604 [2024-11-26 19:25:45.373748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.604 [2024-11-26 19:25:45.373752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.604 [2024-11-26 19:25:45.375021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.604 [2024-11-26 19:25:45.375043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.604 [2024-11-26 19:25:45.375182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.604 [2024-11-26 19:25:45.375184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:12.541 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:12.542 [2024-11-26 19:25:46.072711] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.542 19:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.542 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:12.542 Malloc1 00:19:12.542 [2024-11-26 19:25:46.167877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.542 Malloc2 00:19:12.542 Malloc3 00:19:12.542 Malloc4 00:19:12.542 Malloc5 00:19:12.542 Malloc6 00:19:12.542 Malloc7 00:19:12.802 Malloc8 00:19:12.802 Malloc9 
00:19:12.803 Malloc10 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3779369 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:19:12.803 19:25:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:19:12.803 [2024-11-26 19:25:46.597864] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3779134 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3779134 ']' 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3779134 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3779134 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3779134' 00:19:18.095 killing process with pid 3779134 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3779134 00:19:18.095 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3779134 00:19:18.095 [2024-11-26 19:25:51.606575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cd0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 
19:25:51.606619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cd0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cd0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29cd0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.606998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a1c0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607507] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a690 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a690 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607909] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.607919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2140 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608521] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.608558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdff50 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609176] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.609191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be0910 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.611753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3650 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612048] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3b20 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612550] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3ff0 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.095 [2024-11-26 19:25:51.612961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.612966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.612971] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.612976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.612981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.612986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb3180 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.613853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.613868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.613873] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.613878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.613887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.613893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb2930 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 
Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.614381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 
starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with 
error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 [2024-11-26 19:25:51.614963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd6c0 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.614978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd6c0 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.614984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd6c0 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.614989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd6c0 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.614993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd6c0 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 [2024-11-26 19:25:51.615065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:18.096 starting I/O failed: -6 00:19:18.096 [2024-11-26 19:25:51.615178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdbb0 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.615191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdbb0 is same with the state(6) to be set 00:19:18.096 
starting I/O failed: -6 00:19:18.096 [2024-11-26 19:25:51.615197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdbb0 is same with the state(6) to be set 00:19:18.096 [2024-11-26 19:25:51.615202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdbb0 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.615207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcdbb0 is same with the state(6) to be set 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O failed: -6 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 starting I/O 
failed: -6 00:19:18.096 [2024-11-26 19:25:51.615409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce080 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.096 [2024-11-26 19:25:51.615426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce080 is same with the state(6) to be set 00:19:18.096 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce080 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bce080 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write
completed with error (sct=0, sc=8) 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 [2024-11-26 19:25:51.615669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcd1d0 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 
starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4990 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4990 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4990 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 [2024-11-26 19:25:51.615978] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4990 is same with the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.615984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4990 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with 
error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.616389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4e60 is same with Write completed with error (sct=0, sc=8) 00:19:18.097 the state(6) to be set 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.616401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb4e60 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 
starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.616557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb5330 is same with the state(6) to be set 00:19:18.097 [2024-11-26 19:25:51.616567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb5330 is same with the state(6) to be set 00:19:18.097 [2024-11-26 19:25:51.616572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb5330 is same with the state(6) to be set 00:19:18.097 [2024-11-26 19:25:51.616577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb5330 is same with the state(6) to be set 00:19:18.097 Write completed with error (sct=0, sc=8) 00:19:18.097 starting I/O failed: -6 00:19:18.097 [2024-11-26 19:25:51.616707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:18.097 NVMe io qpair process completion error 00:19:18.098 [2024-11-26 19:25:51.616961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb44c0 is same with the state(6) to be set 00:19:18.098 [2024-11-26 19:25:51.616975] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb44c0 is same with the state(6) to be set 00:19:18.098 [2024-11-26 19:25:51.616980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb44c0 is same with the state(6) to be set 00:19:18.098 [2024-11-26 19:25:51.616985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb44c0 is same with the state(6) to be set 00:19:18.098 [2024-11-26 19:25:51.616990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb44c0 is same with the state(6) to be set 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed 
with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 [2024-11-26 19:25:51.617642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 
starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with 
error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 [2024-11-26 19:25:51.618287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with 
error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 
starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.098 Write completed with error (sct=0, sc=8) 00:19:18.098 starting I/O failed: -6 00:19:18.099 [2024-11-26 19:25:51.618940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write 
completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 
Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 
00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 [2024-11-26 19:25:51.620125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:18.099 NVMe io qpair process completion error 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 
00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 [2024-11-26 19:25:51.621105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:18.099 starting I/O failed: -6 00:19:18.099 starting I/O failed: -6 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 Write completed with error (sct=0, sc=8) 00:19:18.099 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error 
(sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 
00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 [2024-11-26 19:25:51.621790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 starting I/O failed: -6 00:19:18.100 Write completed with error (sct=0, sc=8) 00:19:18.100 
00:19:18.100 starting I/O failed: -6
00:19:18.100 Write completed with error (sct=0, sc=8)
00:19:18.100 [preceding "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pair repeated many times; duplicate lines elided]
00:19:18.100 [2024-11-26 19:25:51.622468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:18.101 [2024-11-26 19:25:51.623746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:18.101 NVMe io qpair process completion error
00:19:18.101 [2024-11-26 19:25:51.624604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:18.101 [2024-11-26 19:25:51.625260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:18.102 [2024-11-26 19:25:51.625934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:18.102 [2024-11-26 19:25:51.627594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:18.102 NVMe io qpair process completion error
00:19:18.103 [2024-11-26 19:25:51.628527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:18.103 [2024-11-26 19:25:51.629180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:18.103 [2024-11-26 19:25:51.629875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:18.104 [2024-11-26 19:25:51.632078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:18.104 NVMe io qpair process completion error
00:19:18.104 Write completed with error
(sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 [2024-11-26 19:25:51.633088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device 
or address) on qpair id 2 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with 
error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 Write completed with error (sct=0, sc=8) 00:19:18.104 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 [2024-11-26 19:25:51.633791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 
00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 
00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 [2024-11-26 19:25:51.634473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 
starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.105 Write completed with error (sct=0, sc=8) 00:19:18.105 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 
00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, 
sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 [2024-11-26 19:25:51.635635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:18.106 NVMe io qpair process completion error 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed 
with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 
00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 [2024-11-26 19:25:51.636415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 
00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 Write completed with error (sct=0, sc=8) 00:19:18.106 starting I/O failed: -6 00:19:18.106 [2024-11-26 19:25:51.637009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 
00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with 
error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 [2024-11-26 19:25:51.637699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed with error (sct=0, sc=8) 00:19:18.107 starting I/O failed: -6 00:19:18.107 Write completed 
with error (sct=0, sc=8)
00:19:18.107 starting I/O failed: -6
00:19:18.107 Write completed with error (sct=0, sc=8)
00:19:18.107 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted; timestamps advance from 00:19:18.107 to 00:19:18.108]
00:19:18.108 [2024-11-26 19:25:51.639702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:18.108 NVMe io qpair process completion error
00:19:18.108 Write completed with error (sct=0, sc=8)
00:19:18.108 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.108 [2024-11-26 19:25:51.640479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:18.108 Write completed with error (sct=0, sc=8)
00:19:18.108 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.108 [2024-11-26 19:25:51.641096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:18.108 Write completed with error (sct=0, sc=8)
00:19:18.108 starting I/O failed: -6
[repeated records omitted; timestamps advance from 00:19:18.108 to 00:19:18.109]
00:19:18.109 [2024-11-26 19:25:51.641773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:18.109 Write completed with error (sct=0, sc=8)
00:19:18.109 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.109 [2024-11-26 19:25:51.643930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:18.109 NVMe io qpair process completion error
00:19:18.109 Write completed with error (sct=0, sc=8)
00:19:18.109 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.110 [2024-11-26 19:25:51.644899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:19:18.110 Write completed with error (sct=0, sc=8)
00:19:18.110 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.110 [2024-11-26 19:25:51.645601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:19:18.110 Write completed with error (sct=0, sc=8)
00:19:18.110 starting I/O failed: -6
[repeated records omitted; timestamps advance from 00:19:18.110 to 00:19:18.111]
00:19:18.111 [2024-11-26 19:25:51.646288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:18.111 Write completed with error (sct=0, sc=8)
00:19:18.111 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.111 [2024-11-26 19:25:51.647814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:19:18.111 NVMe io qpair process completion error
00:19:18.111 Write completed with error (sct=0, sc=8)
00:19:18.111 starting I/O failed: -6
[repeated records omitted; timestamps advance from 00:19:18.111 to 00:19:18.112]
00:19:18.112 [2024-11-26 19:25:51.648768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:19:18.112 Write completed with error (sct=0, sc=8)
00:19:18.112 starting I/O failed: -6
[repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" records omitted]
00:19:18.112 Write completed with error (sct=0, sc=8)
00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 [2024-11-26 19:25:51.649484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with 
error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 
starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 [2024-11-26 19:25:51.650156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 
00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.112 Write completed with error (sct=0, sc=8) 00:19:18.112 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: 
-6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O 
failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 Write completed with error (sct=0, sc=8) 00:19:18.113 starting I/O failed: -6 00:19:18.113 [2024-11-26 19:25:51.652428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:19:18.113 NVMe io qpair process completion error 00:19:18.113 Initializing NVMe Controllers 00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:19:18.113 Controller IO queue size 128, less than required. 00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:19:18.113 Controller IO queue size 128, less than required. 00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:18.113 Controller IO queue size 128, less than required. 00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:19:18.113 Controller IO queue size 128, less than required.
00:19:18.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:19:18.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:19:18.113 Initialization complete. Launching workers.
00:19:18.113 ========================================================
00:19:18.113                                                            Latency(us)
00:19:18.113 Device Information                                                       :    IOPS    MiB/s   Average      min       max
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  2540.81  109.18  50388.24   676.03   96905.51
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  2492.15  107.08  50974.37   582.50  101930.28
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2504.58  107.62  50730.57   657.51  101695.46
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  2594.31  111.47  48987.00   511.96   83790.63
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  2552.60  109.68  49798.37   678.23  101690.67
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  2538.28  109.07  50096.22   397.95   82944.98
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2544.81  109.35  49990.55   418.01   82504.91
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  2697.95  115.93  47161.59   593.77   82129.34
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  2526.27  108.55  50383.88   650.96   83938.39
00:19:18.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  2515.53  108.09  50621.69   594.59   94417.54
00:19:18.113 ========================================================
00:19:18.113 Total                                                                    : 25507.28 1096.02  49890.14   397.95  101930.28
00:19:18.113
00:19:18.113 [2024-11-26 19:25:51.656640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88050 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87390 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d89360 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d889e0 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d87060 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d886b0 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d89540 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d879f0 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d876c0 is same with the state(6) to be set
00:19:18.113 [2024-11-26 19:25:51.656841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88380 is same with the state(6) to be set
00:19:18.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:19:18.113 19:25:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3779369
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3779369
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3779369
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:19.053 rmmod nvme_tcp
00:19:19.053 rmmod nvme_fabrics
00:19:19.053 rmmod nvme_keyring
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3779134 ']'
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3779134
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3779134 ']'
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3779134
00:19:19.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3779134) - No such process
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3779134 is not found'
00:19:19.053 Process with pid 3779134 is not found
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:19:19.053 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:19.314 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:19.314 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:19:19.314 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:19.314 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:19.314 19:25:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:19:21.223
00:19:21.223 real 0m10.028s
00:19:21.223 user 0m27.300s
00:19:21.223 sys 0m3.933s
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:19:21.223 ************************************
00:19:21.223 END TEST nvmf_shutdown_tc4
00:19:21.223 ************************************
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:19:21.223
00:19:21.223 real 0m38.423s
00:19:21.223 user 1m35.953s
00:19:21.223 sys 0m10.892s
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:21.223 19:25:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:21.223 ************************************
00:19:21.223 END TEST nvmf_shutdown
00:19:21.223 ************************************
00:19:21.223 19:25:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:19:21.223 19:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:21.223 19:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:21.223 19:25:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:21.223 ************************************
00:19:21.223 START TEST nvmf_nsid
00:19:21.223 ************************************
00:19:21.223 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:19:21.223 * Looking for test storage...
00:19:21.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:19:21.483 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:19:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.484 --rc genhtml_branch_coverage=1
00:19:21.484 --rc genhtml_function_coverage=1
00:19:21.484 --rc genhtml_legend=1
00:19:21.484 --rc geninfo_all_blocks=1
00:19:21.484 --rc geninfo_unexecuted_blocks=1
00:19:21.484
00:19:21.484 '
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:19:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.484 --rc genhtml_branch_coverage=1
00:19:21.484 --rc genhtml_function_coverage=1
00:19:21.484 --rc genhtml_legend=1
00:19:21.484 --rc geninfo_all_blocks=1
00:19:21.484 --rc geninfo_unexecuted_blocks=1
00:19:21.484
00:19:21.484 '
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:19:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.484 --rc genhtml_branch_coverage=1
00:19:21.484 --rc genhtml_function_coverage=1
00:19:21.484 --rc genhtml_legend=1
00:19:21.484 --rc geninfo_all_blocks=1
00:19:21.484 --rc geninfo_unexecuted_blocks=1
00:19:21.484
00:19:21.484 '
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:19:21.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:21.484 --rc genhtml_branch_coverage=1
00:19:21.484 --rc genhtml_function_coverage=1
00:19:21.484 --rc genhtml_legend=1
00:19:21.484 --rc geninfo_all_blocks=1
00:19:21.484 --rc geninfo_unexecuted_blocks=1
00:19:21.484
00:19:21.484 '
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.484 19:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.484 19:25:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:26.762 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:26.763 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:26.763 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:26.763 Found net devices under 0000:31:00.0: cvl_0_0 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:26.763 Found net devices under 0000:31:00.1: cvl_0_1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:26.763 19:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:26.763 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:19:26.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:19:26.763 00:19:26.763 --- 10.0.0.2 ping statistics --- 00:19:26.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.763 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:26.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:26.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:19:26.763 00:19:26.763 --- 10.0.0.1 ping statistics --- 00:19:26.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:26.763 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:26.763 19:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3785143 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3785143 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3785143 ']' 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:26.763 19:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:19:27.024 [2024-11-26 19:26:00.639887] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:19:27.024 [2024-11-26 19:26:00.639954] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.024 [2024-11-26 19:26:00.731708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.024 [2024-11-26 19:26:00.782112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.024 [2024-11-26 19:26:00.782182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.024 [2024-11-26 19:26:00.782192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.024 [2024-11-26 19:26:00.782199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.024 [2024-11-26 19:26:00.782205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:27.024 [2024-11-26 19:26:00.782986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3785232 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:19:27.593 
19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:19:27.593 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:19:27.594 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:19:27.594 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ea9e9e0f-6ffa-48b4-9d5d-4437b3a9f71b 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=0cc448a6-91b3-4fb5-9e70-34da178842b9 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1ad20586-b8df-410e-9d62-274d99bc2eac 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:27.854 null0 00:19:27.854 null1 00:19:27.854 [2024-11-26 19:26:01.487450] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:19:27.854 [2024-11-26 19:26:01.487501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785232 ] 00:19:27.854 null2 00:19:27.854 [2024-11-26 19:26:01.499094] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.854 [2024-11-26 19:26:01.523280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3785232 /var/tmp/tgt2.sock 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3785232 ']' 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:19:27.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.854 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:27.854 [2024-11-26 19:26:01.565253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.854 [2024-11-26 19:26:01.602281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.113 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.113 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:19:28.113 19:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:19:28.373 [2024-11-26 19:26:02.057168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.373 [2024-11-26 19:26:02.073307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:19:28.373 nvme0n1 nvme0n2 00:19:28.373 nvme1n1 00:19:28.373 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:19:28.373 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:19:28.373 19:26:02 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:19:29.754 19:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ea9e9e0f-6ffa-48b4-9d5d-4437b3a9f71b 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:19:30.690 19:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:19:30.690 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ea9e9e0f6ffa48b49d5d4437b3a9f71b 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EA9E9E0F6FFA48B49D5D4437B3A9F71B 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EA9E9E0F6FFA48B49D5D4437B3A9F71B == \E\A\9\E\9\E\0\F\6\F\F\A\4\8\B\4\9\D\5\D\4\4\3\7\B\3\A\9\F\7\1\B ]] 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 0cc448a6-91b3-4fb5-9e70-34da178842b9 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:19:30.949 
19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0cc448a691b34fb59e7034da178842b9 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0CC448A691B34FB59E7034DA178842B9 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 0CC448A691B34FB59E7034DA178842B9 == \0\C\C\4\4\8\A\6\9\1\B\3\4\F\B\5\9\E\7\0\3\4\D\A\1\7\8\8\4\2\B\9 ]] 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1ad20586-b8df-410e-9d62-274d99bc2eac 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1ad20586b8df410e9d62274d99bc2eac 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1AD20586B8DF410E9D62274D99BC2EAC 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1AD20586B8DF410E9D62274D99BC2EAC == \1\A\D\2\0\5\8\6\B\8\D\F\4\1\0\E\9\D\6\2\2\7\4\D\9\9\B\C\2\E\A\C ]] 00:19:30.949 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:19:31.208 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:19:31.208 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:19:31.208 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3785232 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3785232 ']' 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3785232 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785232 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785232' 00:19:31.209 killing process with pid 3785232 00:19:31.209 19:26:04 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3785232 00:19:31.209 19:26:04 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3785232 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.468 rmmod nvme_tcp 00:19:31.468 rmmod nvme_fabrics 00:19:31.468 rmmod nvme_keyring 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3785143 ']' 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3785143 ']' 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.468 19:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3785143' 00:19:31.468 killing process with pid 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3785143 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.468 19:26:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.468 19:26:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.004 00:19:34.004 real 0m12.318s 00:19:34.004 user 0m9.882s 00:19:34.004 sys 0m4.993s 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:19:34.004 ************************************ 00:19:34.004 END TEST nvmf_nsid 00:19:34.004 ************************************ 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:19:34.004 00:19:34.004 real 11m32.964s 00:19:34.004 user 25m16.133s 00:19:34.004 sys 3m8.116s 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.004 19:26:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.004 ************************************ 00:19:34.004 END TEST nvmf_target_extra 00:19:34.004 ************************************ 00:19:34.004 19:26:07 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:34.004 19:26:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.004 19:26:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.004 19:26:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:34.004 ************************************ 00:19:34.004 START TEST nvmf_host 00:19:34.004 ************************************ 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:19:34.004 * Looking for test storage... 
00:19:34.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.004 --rc genhtml_branch_coverage=1 00:19:34.004 --rc genhtml_function_coverage=1 00:19:34.004 --rc genhtml_legend=1 00:19:34.004 --rc geninfo_all_blocks=1 00:19:34.004 --rc geninfo_unexecuted_blocks=1 00:19:34.004 00:19:34.004 ' 00:19:34.004 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.004 --rc genhtml_branch_coverage=1 00:19:34.004 --rc genhtml_function_coverage=1 00:19:34.004 --rc genhtml_legend=1 00:19:34.004 --rc 
geninfo_all_blocks=1 00:19:34.004 --rc geninfo_unexecuted_blocks=1 00:19:34.005 00:19:34.005 ' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.005 --rc genhtml_branch_coverage=1 00:19:34.005 --rc genhtml_function_coverage=1 00:19:34.005 --rc genhtml_legend=1 00:19:34.005 --rc geninfo_all_blocks=1 00:19:34.005 --rc geninfo_unexecuted_blocks=1 00:19:34.005 00:19:34.005 ' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.005 --rc genhtml_branch_coverage=1 00:19:34.005 --rc genhtml_function_coverage=1 00:19:34.005 --rc genhtml_legend=1 00:19:34.005 --rc geninfo_all_blocks=1 00:19:34.005 --rc geninfo_unexecuted_blocks=1 00:19:34.005 00:19:34.005 ' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.005 ************************************ 00:19:34.005 START TEST nvmf_multicontroller 00:19:34.005 ************************************ 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:34.005 * Looking for test storage... 
00:19:34.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.005 --rc genhtml_branch_coverage=1 00:19:34.005 --rc genhtml_function_coverage=1 
00:19:34.005 --rc genhtml_legend=1 00:19:34.005 --rc geninfo_all_blocks=1 00:19:34.005 --rc geninfo_unexecuted_blocks=1 00:19:34.005 00:19:34.005 ' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.005 --rc genhtml_branch_coverage=1 00:19:34.005 --rc genhtml_function_coverage=1 00:19:34.005 --rc genhtml_legend=1 00:19:34.005 --rc geninfo_all_blocks=1 00:19:34.005 --rc geninfo_unexecuted_blocks=1 00:19:34.005 00:19:34.005 ' 00:19:34.005 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.005 --rc genhtml_branch_coverage=1 00:19:34.005 --rc genhtml_function_coverage=1 00:19:34.006 --rc genhtml_legend=1 00:19:34.006 --rc geninfo_all_blocks=1 00:19:34.006 --rc geninfo_unexecuted_blocks=1 00:19:34.006 00:19:34.006 ' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.006 --rc genhtml_branch_coverage=1 00:19:34.006 --rc genhtml_function_coverage=1 00:19:34.006 --rc genhtml_legend=1 00:19:34.006 --rc geninfo_all_blocks=1 00:19:34.006 --rc geninfo_unexecuted_blocks=1 00:19:34.006 00:19:34.006 ' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.006 19:26:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.006 19:26:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.280 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:39.281 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:39.281 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.281 19:26:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:39.281 Found net devices under 0000:31:00.0: cvl_0_0 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:39.281 Found net devices under 0000:31:00.1: cvl_0_1 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.281 19:26:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.634 ms 00:19:39.282 00:19:39.282 --- 10.0.0.2 ping statistics --- 00:19:39.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.282 rtt min/avg/max/mdev = 0.634/0.634/0.634/0.000 ms 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:39.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:19:39.282 00:19:39.282 --- 10.0.0.1 ping statistics --- 00:19:39.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.282 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.282 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3790658 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3790658 00:19:39.541 19:26:13 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3790658 ']' 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.541 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:39.541 [2024-11-26 19:26:13.195014] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:19:39.541 [2024-11-26 19:26:13.195064] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.541 [2024-11-26 19:26:13.279967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.541 [2024-11-26 19:26:13.317400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.541 [2024-11-26 19:26:13.317432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:39.541 [2024-11-26 19:26:13.317439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.541 [2024-11-26 19:26:13.317446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.541 [2024-11-26 19:26:13.317452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.541 [2024-11-26 19:26:13.318979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.541 [2024-11-26 19:26:13.319138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.541 [2024-11-26 19:26:13.319144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.109 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.109 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:19:40.109 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.109 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.109 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.369 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.369 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 [2024-11-26 19:26:14.001784] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 Malloc0 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 [2024-11-26 
19:26:14.054967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 [2024-11-26 19:26:14.062910] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 Malloc1 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3790696 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3790696 /var/tmp/bdevperf.sock 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 3790696 ']' 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.369 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.370 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.370 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.370 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:40.370 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:41.425 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.425 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:19:41.425 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:41.425 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.425 19:26:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 NVMe0n1 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.425 1 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:19:41.425 19:26:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.425 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.425 request: 00:19:41.425 { 00:19:41.425 "name": "NVMe0", 00:19:41.426 "trtype": "tcp", 00:19:41.426 "traddr": "10.0.0.2", 00:19:41.426 "adrfam": "ipv4", 00:19:41.426 "trsvcid": "4420", 00:19:41.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.426 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:41.426 "hostaddr": "10.0.0.1", 00:19:41.426 "prchk_reftag": false, 00:19:41.426 "prchk_guard": false, 00:19:41.426 "hdgst": false, 00:19:41.426 "ddgst": false, 00:19:41.426 "allow_unrecognized_csi": false, 00:19:41.426 "method": "bdev_nvme_attach_controller", 00:19:41.426 "req_id": 1 00:19:41.426 } 00:19:41.426 Got JSON-RPC error response 00:19:41.426 response: 00:19:41.426 { 00:19:41.426 "code": -114, 00:19:41.426 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:41.426 } 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:41.426 19:26:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.426 request: 00:19:41.426 { 00:19:41.426 "name": "NVMe0", 00:19:41.426 "trtype": "tcp", 00:19:41.426 "traddr": "10.0.0.2", 00:19:41.426 "adrfam": "ipv4", 00:19:41.426 "trsvcid": "4420", 00:19:41.426 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:41.426 "hostaddr": "10.0.0.1", 00:19:41.426 "prchk_reftag": false, 00:19:41.426 "prchk_guard": false, 00:19:41.426 "hdgst": false, 00:19:41.426 "ddgst": false, 00:19:41.426 "allow_unrecognized_csi": false, 00:19:41.426 "method": "bdev_nvme_attach_controller", 00:19:41.426 "req_id": 1 00:19:41.426 } 00:19:41.426 Got JSON-RPC error response 00:19:41.426 response: 00:19:41.426 { 00:19:41.426 "code": -114, 00:19:41.426 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:41.426 } 00:19:41.426 19:26:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.426 request: 00:19:41.426 { 00:19:41.426 "name": "NVMe0", 00:19:41.426 "trtype": "tcp", 00:19:41.426 "traddr": "10.0.0.2", 00:19:41.426 "adrfam": "ipv4", 00:19:41.426 "trsvcid": "4420", 00:19:41.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.426 "hostaddr": "10.0.0.1", 00:19:41.426 "prchk_reftag": false, 00:19:41.426 "prchk_guard": false, 00:19:41.426 "hdgst": false, 00:19:41.426 "ddgst": false, 00:19:41.426 "multipath": "disable", 00:19:41.426 "allow_unrecognized_csi": false, 00:19:41.426 "method": "bdev_nvme_attach_controller", 00:19:41.426 "req_id": 1 00:19:41.426 } 00:19:41.426 Got JSON-RPC error response 00:19:41.426 response: 00:19:41.426 { 00:19:41.426 "code": -114, 00:19:41.426 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:19:41.426 } 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.426 request: 00:19:41.426 { 00:19:41.426 "name": "NVMe0", 00:19:41.426 "trtype": "tcp", 00:19:41.426 "traddr": "10.0.0.2", 00:19:41.426 "adrfam": "ipv4", 00:19:41.426 "trsvcid": "4420", 00:19:41.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.426 "hostaddr": "10.0.0.1", 00:19:41.426 "prchk_reftag": false, 00:19:41.426 "prchk_guard": false, 00:19:41.426 "hdgst": false, 00:19:41.426 "ddgst": false, 00:19:41.426 "multipath": "failover", 00:19:41.426 "allow_unrecognized_csi": false, 00:19:41.426 "method": "bdev_nvme_attach_controller", 00:19:41.426 "req_id": 1 00:19:41.426 } 00:19:41.426 Got JSON-RPC error response 00:19:41.426 response: 00:19:41.426 { 00:19:41.426 "code": -114, 00:19:41.426 "message": "A controller named NVMe0 already exists with the specified network path" 00:19:41.426 } 00:19:41.426 19:26:15 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.426 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 NVMe0n1 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:41.687 19:26:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:43.068 { 00:19:43.069 "results": [ 00:19:43.069 { 00:19:43.069 "job": "NVMe0n1", 00:19:43.069 "core_mask": "0x1", 00:19:43.069 "workload": "write", 00:19:43.069 "status": "finished", 00:19:43.069 "queue_depth": 128, 00:19:43.069 "io_size": 4096, 00:19:43.069 "runtime": 1.005451, 00:19:43.069 "iops": 28797.02740362285, 00:19:43.069 "mibps": 112.48838829540176, 00:19:43.069 "io_failed": 0, 00:19:43.069 "io_timeout": 0, 00:19:43.069 "avg_latency_us": 4434.6285077479215, 00:19:43.069 "min_latency_us": 2116.266666666667, 00:19:43.069 "max_latency_us": 12615.68 00:19:43.069 } 00:19:43.069 ], 00:19:43.069 "core_count": 1 00:19:43.069 } 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3790696 ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790696' 00:19:43.069 killing process with pid 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3790696 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:19:43.069 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:43.069 [2024-11-26 19:26:14.152762] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:19:43.069 [2024-11-26 19:26:14.152839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3790696 ] 00:19:43.069 [2024-11-26 19:26:14.239778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.069 [2024-11-26 19:26:14.293973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.069 [2024-11-26 19:26:15.409563] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 4ebe1f17-4b7e-4ec7-8a8f-51ae809356cf already exists 00:19:43.069 [2024-11-26 19:26:15.409593] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:4ebe1f17-4b7e-4ec7-8a8f-51ae809356cf alias for bdev NVMe1n1 00:19:43.069 [2024-11-26 19:26:15.409602] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:43.069 Running I/O for 1 seconds... 00:19:43.069 28763.00 IOPS, 112.36 MiB/s 00:19:43.069 Latency(us) 00:19:43.069 [2024-11-26T18:26:16.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.069 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:19:43.069 NVMe0n1 : 1.01 28797.03 112.49 0.00 0.00 4434.63 2116.27 12615.68 00:19:43.069 [2024-11-26T18:26:16.934Z] =================================================================================================================== 00:19:43.069 [2024-11-26T18:26:16.934Z] Total : 28797.03 112.49 0.00 0.00 4434.63 2116.27 12615.68 00:19:43.069 Received shutdown signal, test time was about 1.000000 seconds 00:19:43.069 00:19:43.069 Latency(us) 00:19:43.069 [2024-11-26T18:26:16.934Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.069 [2024-11-26T18:26:16.934Z] =================================================================================================================== 00:19:43.069 [2024-11-26T18:26:16.934Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:19:43.069 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.069 rmmod nvme_tcp 00:19:43.069 rmmod nvme_fabrics 00:19:43.069 rmmod nvme_keyring 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3790658 ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3790658 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3790658 ']' 00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3790658 
00:19:43.069 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3790658 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3790658' 00:19:43.070 killing process with pid 3790658 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3790658 00:19:43.070 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3790658 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.330 19:26:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.238 00:19:45.238 real 0m11.431s 00:19:45.238 user 0m15.230s 00:19:45.238 sys 0m4.736s 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:45.238 ************************************ 00:19:45.238 END TEST nvmf_multicontroller 00:19:45.238 ************************************ 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:45.238 ************************************ 00:19:45.238 START TEST nvmf_aer 00:19:45.238 ************************************ 00:19:45.238 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:45.499 * Looking for test storage... 
00:19:45.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.499 --rc genhtml_branch_coverage=1 00:19:45.499 --rc genhtml_function_coverage=1 00:19:45.499 --rc genhtml_legend=1 00:19:45.499 --rc geninfo_all_blocks=1 00:19:45.499 --rc geninfo_unexecuted_blocks=1 00:19:45.499 00:19:45.499 ' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.499 --rc 
genhtml_branch_coverage=1 00:19:45.499 --rc genhtml_function_coverage=1 00:19:45.499 --rc genhtml_legend=1 00:19:45.499 --rc geninfo_all_blocks=1 00:19:45.499 --rc geninfo_unexecuted_blocks=1 00:19:45.499 00:19:45.499 ' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.499 --rc genhtml_branch_coverage=1 00:19:45.499 --rc genhtml_function_coverage=1 00:19:45.499 --rc genhtml_legend=1 00:19:45.499 --rc geninfo_all_blocks=1 00:19:45.499 --rc geninfo_unexecuted_blocks=1 00:19:45.499 00:19:45.499 ' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:45.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:45.499 --rc genhtml_branch_coverage=1 00:19:45.499 --rc genhtml_function_coverage=1 00:19:45.499 --rc genhtml_legend=1 00:19:45.499 --rc geninfo_all_blocks=1 00:19:45.499 --rc geninfo_unexecuted_blocks=1 00:19:45.499 00:19:45.499 ' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:45.499 19:26:19 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:45.499 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.499 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:45.500 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.500 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:45.500 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:45.500 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:19:45.500 19:26:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.777 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.777 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.777 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.777 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:50.778 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:50.778 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.778 19:26:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:50.778 Found net devices under 0000:31:00.0: cvl_0_0 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:50.778 Found net devices under 0000:31:00.1: cvl_0_1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:50.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:50.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:19:50.778 00:19:50.778 --- 10.0.0.2 ping statistics --- 00:19:50.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.778 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:19:50.778 00:19:50.778 --- 10.0.0.1 ping statistics --- 00:19:50.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.778 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3795722 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3795722 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3795722 ']' 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.778 19:26:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:50.778 [2024-11-26 19:26:24.641106] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:19:50.779 [2024-11-26 19:26:24.641172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:51.038 [2024-11-26 19:26:24.733091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:51.038 [2024-11-26 19:26:24.786988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:51.038 [2024-11-26 19:26:24.787044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:51.038 [2024-11-26 19:26:24.787053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:51.038 [2024-11-26 19:26:24.787061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:51.038 [2024-11-26 19:26:24.787068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:51.038 [2024-11-26 19:26:24.789453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.038 [2024-11-26 19:26:24.789612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.038 [2024-11-26 19:26:24.789770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.038 [2024-11-26 19:26:24.789771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.605 [2024-11-26 19:26:25.460494] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.605 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.864 Malloc0 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.864 [2024-11-26 19:26:25.519756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:51.864 [ 00:19:51.864 { 00:19:51.864 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:51.864 "subtype": "Discovery", 00:19:51.864 "listen_addresses": [], 00:19:51.864 "allow_any_host": true, 00:19:51.864 "hosts": [] 00:19:51.864 }, 00:19:51.864 { 00:19:51.864 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.864 "subtype": "NVMe", 00:19:51.864 "listen_addresses": [ 00:19:51.864 { 00:19:51.864 "trtype": "TCP", 00:19:51.864 "adrfam": "IPv4", 00:19:51.864 "traddr": "10.0.0.2", 00:19:51.864 "trsvcid": "4420" 00:19:51.864 } 00:19:51.864 ], 00:19:51.864 "allow_any_host": true, 00:19:51.864 "hosts": [], 00:19:51.864 "serial_number": "SPDK00000000000001", 00:19:51.864 "model_number": "SPDK bdev Controller", 00:19:51.864 "max_namespaces": 2, 00:19:51.864 "min_cntlid": 1, 00:19:51.864 "max_cntlid": 65519, 00:19:51.864 "namespaces": [ 00:19:51.864 { 00:19:51.864 "nsid": 1, 00:19:51.864 "bdev_name": "Malloc0", 00:19:51.864 "name": "Malloc0", 00:19:51.864 "nguid": "D5BB68D810DE407B9F759FCB5BB1B188", 00:19:51.864 "uuid": "d5bb68d8-10de-407b-9f75-9fcb5bb1b188" 00:19:51.864 } 00:19:51.864 ] 00:19:51.864 } 00:19:51.864 ] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3795948 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:51.864 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:51.865 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:19:51.865 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 Malloc1 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 Asynchronous Event Request test 00:19:52.124 Attaching to 10.0.0.2 00:19:52.124 Attached to 10.0.0.2 00:19:52.124 Registering asynchronous event callbacks... 00:19:52.124 Starting namespace attribute notice tests for all controllers... 00:19:52.124 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:52.124 aer_cb - Changed Namespace 00:19:52.124 Cleaning up... 
00:19:52.124 [ 00:19:52.124 { 00:19:52.124 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:52.124 "subtype": "Discovery", 00:19:52.124 "listen_addresses": [], 00:19:52.124 "allow_any_host": true, 00:19:52.124 "hosts": [] 00:19:52.124 }, 00:19:52.124 { 00:19:52.124 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.124 "subtype": "NVMe", 00:19:52.124 "listen_addresses": [ 00:19:52.124 { 00:19:52.124 "trtype": "TCP", 00:19:52.124 "adrfam": "IPv4", 00:19:52.124 "traddr": "10.0.0.2", 00:19:52.124 "trsvcid": "4420" 00:19:52.124 } 00:19:52.124 ], 00:19:52.124 "allow_any_host": true, 00:19:52.124 "hosts": [], 00:19:52.124 "serial_number": "SPDK00000000000001", 00:19:52.124 "model_number": "SPDK bdev Controller", 00:19:52.124 "max_namespaces": 2, 00:19:52.124 "min_cntlid": 1, 00:19:52.124 "max_cntlid": 65519, 00:19:52.124 "namespaces": [ 00:19:52.124 { 00:19:52.124 "nsid": 1, 00:19:52.124 "bdev_name": "Malloc0", 00:19:52.124 "name": "Malloc0", 00:19:52.124 "nguid": "D5BB68D810DE407B9F759FCB5BB1B188", 00:19:52.124 "uuid": "d5bb68d8-10de-407b-9f75-9fcb5bb1b188" 00:19:52.124 }, 00:19:52.124 { 00:19:52.124 "nsid": 2, 00:19:52.124 "bdev_name": "Malloc1", 00:19:52.124 "name": "Malloc1", 00:19:52.124 "nguid": "2E1E02D4EF104EC8BF1E584C9767901B", 00:19:52.124 "uuid": "2e1e02d4-ef10-4ec8-bf1e-584c9767901b" 00:19:52.124 } 00:19:52.124 ] 00:19:52.124 } 00:19:52.124 ] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3795948 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.124 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.125 rmmod nvme_tcp 00:19:52.125 rmmod nvme_fabrics 00:19:52.125 rmmod nvme_keyring 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
3795722 ']' 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3795722 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3795722 ']' 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3795722 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3795722 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3795722' 00:19:52.125 killing process with pid 3795722 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3795722 00:19:52.125 19:26:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3795722 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.383 19:26:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.286 19:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.287 00:19:54.287 real 0m9.064s 00:19:54.287 user 0m6.605s 00:19:54.287 sys 0m4.511s 00:19:54.287 19:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.287 19:26:28 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:54.287 ************************************ 00:19:54.287 END TEST nvmf_aer 00:19:54.287 ************************************ 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.546 ************************************ 00:19:54.546 START TEST nvmf_async_init 00:19:54.546 ************************************ 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:19:54.546 * Looking for test storage... 
00:19:54.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.546 19:26:28 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 
00:19:54.546 ' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.546 --rc genhtml_branch_coverage=1 00:19:54.546 --rc genhtml_function_coverage=1 00:19:54.546 --rc genhtml_legend=1 00:19:54.546 --rc geninfo_all_blocks=1 00:19:54.546 --rc geninfo_unexecuted_blocks=1 00:19:54.546 00:19:54.546 ' 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.546 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:54.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=de9378ba4eb74badaf11a9ff0f1fb33e 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.547 19:26:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.829 19:26:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:59.829 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:59.829 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:59.829 Found net devices under 0000:31:00.0: cvl_0_0 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.829 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:59.829 Found net devices under 0000:31:00.1: cvl_0_1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.830 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:00.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:00.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:20:00.090 00:20:00.090 --- 10.0.0.2 ping statistics --- 00:20:00.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.090 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:00.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:20:00.090 00:20:00.090 --- 10.0.0.1 ping statistics --- 00:20:00.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.090 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3800415 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3800415 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3800415 ']' 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:00.090 19:26:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:00.090 [2024-11-26 19:26:33.770900] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
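The waitforlisten step above blocks until the freshly launched nvmf_tgt exposes its RPC UNIX domain socket at /var/tmp/spdk.sock. A minimal sketch of that polling pattern follows; the function name, retry count, and poll interval are illustrative, not the exact autotest_common.sh implementation:

```shell
#!/usr/bin/env bash
# Hedged sketch of the "waitforlisten" pattern: poll until the target's RPC
# UNIX socket appears, bailing out early if the process dies, bounded by a
# retry count. Names and defaults here are illustrative.
wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        # If the app exited before listening, give up immediately.
        if ! kill -0 "$pid" 2>/dev/null; then
            return 1
        fi
        # Success once the path exists and is a socket.
        if [ -S "$rpc_addr" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1   # timed out waiting for the listener
}
```

The real helper additionally issues an RPC over the socket to confirm the target is responsive, not merely that the socket file exists.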
00:20:00.090 [2024-11-26 19:26:33.770949] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.090 [2024-11-26 19:26:33.854614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.090 [2024-11-26 19:26:33.890237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.090 [2024-11-26 19:26:33.890267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.090 [2024-11-26 19:26:33.890275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.090 [2024-11-26 19:26:33.890281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.090 [2024-11-26 19:26:33.890287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
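The prologue of this run (nvmf_tcp_init in nvmf/common.sh, traced above) moved the target-side interface cvl_0_0 into a private network namespace with address 10.0.0.2/24, left the initiator side cvl_0_1 in the root namespace as 10.0.0.1/24, opened TCP port 4420 via iptables, and verified reachability with one ping in each direction. A dry-run sketch of that sequence, with commands echoed rather than executed so it can be inspected without root; interface names and addresses are taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up by nvmf_tcp_init in this run.
# "run" prints each command instead of executing it (no root required).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                      # target-side namespace, as in the log
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"     # target interface leaves the root ns
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                  # root ns -> target ns
run ip netns exec "$NS" ping -c 1 10.0.0.1   # target ns -> root ns
```

Because the target runs inside the namespace, every later invocation of nvmf_tgt and its RPCs in this log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.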
00:20:00.090 [2024-11-26 19:26:33.890855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 [2024-11-26 19:26:34.592826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 null0 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g de9378ba4eb74badaf11a9ff0f1fb33e 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 [2024-11-26 19:26:34.633186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 nvme0n1 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.029 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.029 [ 00:20:01.029 { 00:20:01.029 "name": "nvme0n1", 00:20:01.029 "aliases": [ 00:20:01.029 "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e" 00:20:01.029 ], 00:20:01.029 "product_name": "NVMe disk", 00:20:01.029 "block_size": 512, 00:20:01.029 "num_blocks": 2097152, 00:20:01.029 "uuid": "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e", 00:20:01.029 "numa_id": 0, 00:20:01.029 "assigned_rate_limits": { 00:20:01.029 "rw_ios_per_sec": 0, 00:20:01.029 "rw_mbytes_per_sec": 0, 00:20:01.029 "r_mbytes_per_sec": 0, 00:20:01.029 "w_mbytes_per_sec": 0 00:20:01.029 }, 00:20:01.029 "claimed": false, 00:20:01.030 "zoned": false, 00:20:01.030 "supported_io_types": { 00:20:01.030 "read": true, 00:20:01.030 "write": true, 00:20:01.030 "unmap": false, 00:20:01.030 "flush": true, 00:20:01.030 "reset": true, 00:20:01.030 "nvme_admin": true, 00:20:01.030 "nvme_io": true, 00:20:01.030 "nvme_io_md": false, 00:20:01.030 "write_zeroes": true, 00:20:01.030 "zcopy": false, 00:20:01.030 "get_zone_info": false, 00:20:01.030 "zone_management": false, 00:20:01.030 "zone_append": false, 00:20:01.030 "compare": true, 00:20:01.030 "compare_and_write": true, 00:20:01.030 "abort": true, 00:20:01.030 "seek_hole": false, 00:20:01.030 "seek_data": false, 00:20:01.030 "copy": true, 00:20:01.030 
"nvme_iov_md": false 00:20:01.030 }, 00:20:01.030 "memory_domains": [ 00:20:01.030 { 00:20:01.030 "dma_device_id": "system", 00:20:01.030 "dma_device_type": 1 00:20:01.030 } 00:20:01.030 ], 00:20:01.030 "driver_specific": { 00:20:01.030 "nvme": [ 00:20:01.030 { 00:20:01.030 "trid": { 00:20:01.030 "trtype": "TCP", 00:20:01.030 "adrfam": "IPv4", 00:20:01.030 "traddr": "10.0.0.2", 00:20:01.030 "trsvcid": "4420", 00:20:01.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.030 }, 00:20:01.030 "ctrlr_data": { 00:20:01.030 "cntlid": 1, 00:20:01.030 "vendor_id": "0x8086", 00:20:01.030 "model_number": "SPDK bdev Controller", 00:20:01.030 "serial_number": "00000000000000000000", 00:20:01.030 "firmware_revision": "25.01", 00:20:01.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.030 "oacs": { 00:20:01.030 "security": 0, 00:20:01.030 "format": 0, 00:20:01.030 "firmware": 0, 00:20:01.030 "ns_manage": 0 00:20:01.030 }, 00:20:01.030 "multi_ctrlr": true, 00:20:01.030 "ana_reporting": false 00:20:01.030 }, 00:20:01.030 "vs": { 00:20:01.030 "nvme_version": "1.3" 00:20:01.030 }, 00:20:01.030 "ns_data": { 00:20:01.030 "id": 1, 00:20:01.030 "can_share": true 00:20:01.030 } 00:20:01.030 } 00:20:01.030 ], 00:20:01.030 "mp_policy": "active_passive" 00:20:01.030 } 00:20:01.030 } 00:20:01.030 ] 00:20:01.030 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.030 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:01.030 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.030 19:26:34 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.030 [2024-11-26 19:26:34.881764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:01.030 [2024-11-26 19:26:34.881850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x73ddd0 (9): Bad file descriptor 00:20:01.289 [2024-11-26 19:26:35.014212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.289 [ 00:20:01.289 { 00:20:01.289 "name": "nvme0n1", 00:20:01.289 "aliases": [ 00:20:01.289 "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e" 00:20:01.289 ], 00:20:01.289 "product_name": "NVMe disk", 00:20:01.289 "block_size": 512, 00:20:01.289 "num_blocks": 2097152, 00:20:01.289 "uuid": "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e", 00:20:01.289 "numa_id": 0, 00:20:01.289 "assigned_rate_limits": { 00:20:01.289 "rw_ios_per_sec": 0, 00:20:01.289 "rw_mbytes_per_sec": 0, 00:20:01.289 "r_mbytes_per_sec": 0, 00:20:01.289 "w_mbytes_per_sec": 0 00:20:01.289 }, 00:20:01.289 "claimed": false, 00:20:01.289 "zoned": false, 00:20:01.289 "supported_io_types": { 00:20:01.289 "read": true, 00:20:01.289 "write": true, 00:20:01.289 "unmap": false, 00:20:01.289 "flush": true, 00:20:01.289 "reset": true, 00:20:01.289 "nvme_admin": true, 00:20:01.289 "nvme_io": true, 00:20:01.289 "nvme_io_md": false, 00:20:01.289 "write_zeroes": true, 00:20:01.289 "zcopy": false, 00:20:01.289 "get_zone_info": false, 00:20:01.289 "zone_management": false, 00:20:01.289 "zone_append": false, 00:20:01.289 "compare": true, 00:20:01.289 "compare_and_write": true, 00:20:01.289 "abort": true, 00:20:01.289 "seek_hole": false, 00:20:01.289 "seek_data": false, 00:20:01.289 "copy": true, 00:20:01.289 "nvme_iov_md": false 00:20:01.289 }, 00:20:01.289 "memory_domains": [ 
00:20:01.289 { 00:20:01.289 "dma_device_id": "system", 00:20:01.289 "dma_device_type": 1 00:20:01.289 } 00:20:01.289 ], 00:20:01.289 "driver_specific": { 00:20:01.289 "nvme": [ 00:20:01.289 { 00:20:01.289 "trid": { 00:20:01.289 "trtype": "TCP", 00:20:01.289 "adrfam": "IPv4", 00:20:01.289 "traddr": "10.0.0.2", 00:20:01.289 "trsvcid": "4420", 00:20:01.289 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.289 }, 00:20:01.289 "ctrlr_data": { 00:20:01.289 "cntlid": 2, 00:20:01.289 "vendor_id": "0x8086", 00:20:01.289 "model_number": "SPDK bdev Controller", 00:20:01.289 "serial_number": "00000000000000000000", 00:20:01.289 "firmware_revision": "25.01", 00:20:01.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.289 "oacs": { 00:20:01.289 "security": 0, 00:20:01.289 "format": 0, 00:20:01.289 "firmware": 0, 00:20:01.289 "ns_manage": 0 00:20:01.289 }, 00:20:01.289 "multi_ctrlr": true, 00:20:01.289 "ana_reporting": false 00:20:01.289 }, 00:20:01.289 "vs": { 00:20:01.289 "nvme_version": "1.3" 00:20:01.289 }, 00:20:01.289 "ns_data": { 00:20:01.289 "id": 1, 00:20:01.289 "can_share": true 00:20:01.289 } 00:20:01.289 } 00:20:01.289 ], 00:20:01.289 "mp_policy": "active_passive" 00:20:01.289 } 00:20:01.289 } 00:20:01.289 ] 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.tE4E99C6vl 
00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.tE4E99C6vl 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.tE4E99C6vl 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.289 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.290 [2024-11-26 19:26:35.074376] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:01.290 [2024-11-26 19:26:35.074544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
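The TLS portion of the test above provisions an interchange-format PSK file, registers it with the keyring, disables allow-any-host, opens a second listener on port 4421 with --secure-channel, and then authorizes and attaches the host with --psk. A dry-run sketch of that RPC sequence; the rpc.py invocation and key path are placeholders, and calls are echoed rather than sent to a live target:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TLS provisioning RPCs this test drives via rpc_cmd.
# "rpc" echoes the call instead of invoking scripts/rpc.py against a target.
rpc() { echo "rpc.py $*"; }

KEY=/tmp/psk.key   # placeholder for the mktemp-created, chmod 0600 key file
rpc keyring_file_add_key key0 "$KEY"
rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
    nqn.2016-06.io.spdk:host1 --psk key0
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
```

Note the ordering: the key must be in the keyring and the host authorized with that key before the attach on the secure-channel listener can succeed, which is why the log's TLS-experimental notices appear on both the listen and attach sides.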
00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.290 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.290 [2024-11-26 19:26:35.090435] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:01.548 nvme0n1 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.548 [ 00:20:01.548 { 00:20:01.548 "name": "nvme0n1", 00:20:01.548 "aliases": [ 00:20:01.548 "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e" 00:20:01.548 ], 00:20:01.548 "product_name": "NVMe disk", 00:20:01.548 "block_size": 512, 00:20:01.548 "num_blocks": 2097152, 00:20:01.548 "uuid": "de9378ba-4eb7-4bad-af11-a9ff0f1fb33e", 00:20:01.548 "numa_id": 0, 00:20:01.548 "assigned_rate_limits": { 00:20:01.548 "rw_ios_per_sec": 0, 00:20:01.548 
"rw_mbytes_per_sec": 0, 00:20:01.548 "r_mbytes_per_sec": 0, 00:20:01.548 "w_mbytes_per_sec": 0 00:20:01.548 }, 00:20:01.548 "claimed": false, 00:20:01.548 "zoned": false, 00:20:01.548 "supported_io_types": { 00:20:01.548 "read": true, 00:20:01.548 "write": true, 00:20:01.548 "unmap": false, 00:20:01.548 "flush": true, 00:20:01.548 "reset": true, 00:20:01.548 "nvme_admin": true, 00:20:01.548 "nvme_io": true, 00:20:01.548 "nvme_io_md": false, 00:20:01.548 "write_zeroes": true, 00:20:01.548 "zcopy": false, 00:20:01.548 "get_zone_info": false, 00:20:01.548 "zone_management": false, 00:20:01.548 "zone_append": false, 00:20:01.548 "compare": true, 00:20:01.548 "compare_and_write": true, 00:20:01.548 "abort": true, 00:20:01.548 "seek_hole": false, 00:20:01.548 "seek_data": false, 00:20:01.548 "copy": true, 00:20:01.548 "nvme_iov_md": false 00:20:01.548 }, 00:20:01.548 "memory_domains": [ 00:20:01.548 { 00:20:01.548 "dma_device_id": "system", 00:20:01.548 "dma_device_type": 1 00:20:01.548 } 00:20:01.548 ], 00:20:01.548 "driver_specific": { 00:20:01.548 "nvme": [ 00:20:01.548 { 00:20:01.548 "trid": { 00:20:01.548 "trtype": "TCP", 00:20:01.548 "adrfam": "IPv4", 00:20:01.548 "traddr": "10.0.0.2", 00:20:01.548 "trsvcid": "4421", 00:20:01.548 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:01.548 }, 00:20:01.548 "ctrlr_data": { 00:20:01.548 "cntlid": 3, 00:20:01.548 "vendor_id": "0x8086", 00:20:01.548 "model_number": "SPDK bdev Controller", 00:20:01.548 "serial_number": "00000000000000000000", 00:20:01.548 "firmware_revision": "25.01", 00:20:01.548 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.548 "oacs": { 00:20:01.548 "security": 0, 00:20:01.548 "format": 0, 00:20:01.548 "firmware": 0, 00:20:01.548 "ns_manage": 0 00:20:01.548 }, 00:20:01.548 "multi_ctrlr": true, 00:20:01.548 "ana_reporting": false 00:20:01.548 }, 00:20:01.548 "vs": { 00:20:01.548 "nvme_version": "1.3" 00:20:01.548 }, 00:20:01.548 "ns_data": { 00:20:01.548 "id": 1, 00:20:01.548 "can_share": true 00:20:01.548 } 
00:20:01.548 } 00:20:01.548 ], 00:20:01.548 "mp_policy": "active_passive" 00:20:01.548 } 00:20:01.548 } 00:20:01.548 ] 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.tE4E99C6vl 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:01.548 rmmod nvme_tcp 00:20:01.548 rmmod nvme_fabrics 00:20:01.548 rmmod nvme_keyring 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:20:01.548 19:26:35 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3800415 ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3800415 ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3800415' 00:20:01.548 killing process with pid 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3800415 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:20:01.548 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:01.548 
19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:20:01.806 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:01.806 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:01.806 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.806 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.806 19:26:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:03.708 00:20:03.708 real 0m9.284s 00:20:03.708 user 0m3.269s 00:20:03.708 sys 0m4.373s 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.708 ************************************ 00:20:03.708 END TEST nvmf_async_init 00:20:03.708 ************************************ 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.708 ************************************ 00:20:03.708 START TEST dma 00:20:03.708 ************************************ 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:20:03.708 * Looking for test storage... 00:20:03.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:03.708 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.968 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.969 --rc genhtml_branch_coverage=1 00:20:03.969 --rc genhtml_function_coverage=1 00:20:03.969 --rc genhtml_legend=1 00:20:03.969 --rc geninfo_all_blocks=1 00:20:03.969 --rc geninfo_unexecuted_blocks=1 00:20:03.969 00:20:03.969 ' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.969 --rc genhtml_branch_coverage=1 00:20:03.969 --rc genhtml_function_coverage=1 
00:20:03.969 --rc genhtml_legend=1 00:20:03.969 --rc geninfo_all_blocks=1 00:20:03.969 --rc geninfo_unexecuted_blocks=1 00:20:03.969 00:20:03.969 ' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.969 --rc genhtml_branch_coverage=1 00:20:03.969 --rc genhtml_function_coverage=1 00:20:03.969 --rc genhtml_legend=1 00:20:03.969 --rc geninfo_all_blocks=1 00:20:03.969 --rc geninfo_unexecuted_blocks=1 00:20:03.969 00:20:03.969 ' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:03.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.969 --rc genhtml_branch_coverage=1 00:20:03.969 --rc genhtml_function_coverage=1 00:20:03.969 --rc genhtml_legend=1 00:20:03.969 --rc geninfo_all_blocks=1 00:20:03.969 --rc geninfo_unexecuted_blocks=1 00:20:03.969 00:20:03.969 ' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:03.969 
19:26:37 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.969 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:03.970 00:20:03.970 real 0m0.138s 00:20:03.970 user 0m0.084s 00:20:03.970 sys 0m0.060s 00:20:03.970 19:26:37 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:03.970 ************************************ 00:20:03.970 END TEST dma 00:20:03.970 ************************************ 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:03.970 ************************************ 00:20:03.970 START TEST nvmf_identify 00:20:03.970 ************************************ 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:03.970 * Looking for test storage... 
00:20:03.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.970 --rc genhtml_branch_coverage=1 00:20:03.970 --rc genhtml_function_coverage=1 00:20:03.970 --rc genhtml_legend=1 00:20:03.970 --rc geninfo_all_blocks=1 00:20:03.970 --rc geninfo_unexecuted_blocks=1 00:20:03.970 00:20:03.970 ' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:20:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.970 --rc genhtml_branch_coverage=1 00:20:03.970 --rc genhtml_function_coverage=1 00:20:03.970 --rc genhtml_legend=1 00:20:03.970 --rc geninfo_all_blocks=1 00:20:03.970 --rc geninfo_unexecuted_blocks=1 00:20:03.970 00:20:03.970 ' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.970 --rc genhtml_branch_coverage=1 00:20:03.970 --rc genhtml_function_coverage=1 00:20:03.970 --rc genhtml_legend=1 00:20:03.970 --rc geninfo_all_blocks=1 00:20:03.970 --rc geninfo_unexecuted_blocks=1 00:20:03.970 00:20:03.970 ' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:03.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:03.970 --rc genhtml_branch_coverage=1 00:20:03.970 --rc genhtml_function_coverage=1 00:20:03.970 --rc genhtml_legend=1 00:20:03.970 --rc geninfo_all_blocks=1 00:20:03.970 --rc geninfo_unexecuted_blocks=1 00:20:03.970 00:20:03.970 ' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.970 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:03.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.971 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.232 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:04.232 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:04.232 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:20:04.232 19:26:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.500 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.501 19:26:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:09.501 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.501 
19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:09.501 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:09.501 Found net devices under 0000:31:00.0: cvl_0_0 00:20:09.501 19:26:43 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:09.501 Found net devices under 0000:31:00.1: cvl_0_1 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:20:09.501 00:20:09.501 --- 10.0.0.2 ping statistics --- 00:20:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.501 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:20:09.501 00:20:09.501 --- 10.0.0.1 ping statistics --- 00:20:09.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.501 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.501 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3805166 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3805166 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3805166 ']' 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.502 19:26:43 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:09.502 [2024-11-26 19:26:43.350089] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:20:09.502 [2024-11-26 19:26:43.350149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.761 [2024-11-26 19:26:43.436628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.761 [2024-11-26 19:26:43.474684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.761 [2024-11-26 19:26:43.474716] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.761 [2024-11-26 19:26:43.474724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.761 [2024-11-26 19:26:43.474731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.761 [2024-11-26 19:26:43.474737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:09.761 [2024-11-26 19:26:43.476559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.761 [2024-11-26 19:26:43.476678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.761 [2024-11-26 19:26:43.476830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.761 [2024-11-26 19:26:43.476832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 [2024-11-26 19:26:44.132032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.329 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 Malloc0 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 19:26:44 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 [2024-11-26 19:26:44.221990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.589 19:26:44 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.589 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.590 [ 00:20:10.590 { 00:20:10.590 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:10.590 "subtype": "Discovery", 00:20:10.590 "listen_addresses": [ 00:20:10.590 { 00:20:10.590 "trtype": "TCP", 00:20:10.590 "adrfam": "IPv4", 00:20:10.590 "traddr": "10.0.0.2", 00:20:10.590 "trsvcid": "4420" 00:20:10.590 } 00:20:10.590 ], 00:20:10.590 "allow_any_host": true, 00:20:10.590 "hosts": [] 00:20:10.590 }, 00:20:10.590 { 00:20:10.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.590 "subtype": "NVMe", 00:20:10.590 "listen_addresses": [ 00:20:10.590 { 00:20:10.590 "trtype": "TCP", 00:20:10.590 "adrfam": "IPv4", 00:20:10.590 "traddr": "10.0.0.2", 00:20:10.590 "trsvcid": "4420" 00:20:10.590 } 00:20:10.590 ], 00:20:10.590 "allow_any_host": true, 00:20:10.590 "hosts": [], 00:20:10.590 "serial_number": "SPDK00000000000001", 00:20:10.590 "model_number": "SPDK bdev Controller", 00:20:10.590 "max_namespaces": 32, 00:20:10.590 "min_cntlid": 1, 00:20:10.590 "max_cntlid": 65519, 00:20:10.590 "namespaces": [ 00:20:10.590 { 00:20:10.590 "nsid": 1, 00:20:10.590 "bdev_name": "Malloc0", 00:20:10.590 "name": "Malloc0", 00:20:10.590 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:10.590 "eui64": "ABCDEF0123456789", 00:20:10.590 "uuid": "c547a7a8-222c-4e17-ba3b-780ab01f95c3" 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 } 00:20:10.590 ] 00:20:10.590 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.590 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:10.590 [2024-11-26 19:26:44.258046] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:20:10.590 [2024-11-26 19:26:44.258076] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805369 ] 00:20:10.590 [2024-11-26 19:26:44.312317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:10.590 [2024-11-26 19:26:44.312368] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:10.590 [2024-11-26 19:26:44.312373] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:10.590 [2024-11-26 19:26:44.312390] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:10.590 [2024-11-26 19:26:44.312399] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:10.590 [2024-11-26 19:26:44.313175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:10.590 [2024-11-26 19:26:44.313210] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x709550 0 00:20:10.590 [2024-11-26 19:26:44.319111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:10.590 [2024-11-26 19:26:44.319126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:10.590 [2024-11-26 19:26:44.319130] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:10.590 [2024-11-26 19:26:44.319134] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:10.590 [2024-11-26 19:26:44.319176] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.319182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.319187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.590 [2024-11-26 19:26:44.319201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:10.590 [2024-11-26 19:26:44.319221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.327113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.327123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.590 [2024-11-26 19:26:44.327127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.590 [2024-11-26 19:26:44.327142] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:10.590 [2024-11-26 19:26:44.327150] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:10.590 [2024-11-26 19:26:44.327156] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:10.590 [2024-11-26 19:26:44.327171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 
00:20:10.590 [2024-11-26 19:26:44.327187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.590 [2024-11-26 19:26:44.327202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.327395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.327402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.590 [2024-11-26 19:26:44.327405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327409] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.590 [2024-11-26 19:26:44.327417] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:10.590 [2024-11-26 19:26:44.327425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:10.590 [2024-11-26 19:26:44.327432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.590 [2024-11-26 19:26:44.327446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.590 [2024-11-26 19:26:44.327457] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.327637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.327644] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:20:10.590 [2024-11-26 19:26:44.327647] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.590 [2024-11-26 19:26:44.327657] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:10.590 [2024-11-26 19:26:44.327665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:10.590 [2024-11-26 19:26:44.327675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.590 [2024-11-26 19:26:44.327689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.590 [2024-11-26 19:26:44.327700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.327884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.327892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.590 [2024-11-26 19:26:44.327895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.590 [2024-11-26 19:26:44.327904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:10.590 [2024-11-26 19:26:44.327913] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327917] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.327921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.590 [2024-11-26 19:26:44.327928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.590 [2024-11-26 19:26:44.327937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.328153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.328160] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.590 [2024-11-26 19:26:44.328163] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.328167] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.590 [2024-11-26 19:26:44.328172] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:10.590 [2024-11-26 19:26:44.328178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:10.590 [2024-11-26 19:26:44.328185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:10.590 [2024-11-26 19:26:44.328297] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:10.590 [2024-11-26 19:26:44.328302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:20:10.590 [2024-11-26 19:26:44.328311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.328314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.590 [2024-11-26 19:26:44.328318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.590 [2024-11-26 19:26:44.328325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.590 [2024-11-26 19:26:44.328335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.590 [2024-11-26 19:26:44.328529] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.590 [2024-11-26 19:26:44.328535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.590 [2024-11-26 19:26:44.328539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.328543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.328551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:10.591 [2024-11-26 19:26:44.328560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.328564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.328568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.328575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.591 [2024-11-26 19:26:44.328585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.591 [2024-11-26 
19:26:44.328786] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.591 [2024-11-26 19:26:44.328792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.591 [2024-11-26 19:26:44.328796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.328800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.328804] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:10.591 [2024-11-26 19:26:44.328809] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.328817] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:10.591 [2024-11-26 19:26:44.328825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.328834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.328837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.328844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.591 [2024-11-26 19:26:44.328855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.591 [2024-11-26 19:26:44.329092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.591 [2024-11-26 19:26:44.329098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:20:10.591 [2024-11-26 19:26:44.329110] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329114] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x709550): datao=0, datal=4096, cccid=0 00:20:10.591 [2024-11-26 19:26:44.329119] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x76b100) on tqpair(0x709550): expected_datao=0, payload_size=4096 00:20:10.591 [2024-11-26 19:26:44.329124] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329136] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329141] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329306] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.591 [2024-11-26 19:26:44.329312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.591 [2024-11-26 19:26:44.329316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.329328] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:10.591 [2024-11-26 19:26:44.329333] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:10.591 [2024-11-26 19:26:44.329337] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:10.591 [2024-11-26 19:26:44.329345] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:10.591 [2024-11-26 19:26:44.329350] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:20:10.591 [2024-11-26 19:26:44.329355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.329364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.329370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.591 [2024-11-26 19:26:44.329397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.591 [2024-11-26 19:26:44.329578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.591 [2024-11-26 19:26:44.329584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.591 [2024-11-26 19:26:44.329588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.329600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.591 [2024-11-26 19:26:44.329620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329623] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.591 [2024-11-26 19:26:44.329639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329643] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.591 [2024-11-26 19:26:44.329658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.591 [2024-11-26 19:26:44.329676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.329687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:20:10.591 [2024-11-26 19:26:44.329693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.329705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.591 [2024-11-26 19:26:44.329717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b100, cid 0, qid 0 00:20:10.591 [2024-11-26 19:26:44.329722] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b280, cid 1, qid 0 00:20:10.591 [2024-11-26 19:26:44.329727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b400, cid 2, qid 0 00:20:10.591 [2024-11-26 19:26:44.329732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.591 [2024-11-26 19:26:44.329737] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b700, cid 4, qid 0 00:20:10.591 [2024-11-26 19:26:44.329985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.591 [2024-11-26 19:26:44.329992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.591 [2024-11-26 19:26:44.329995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.329999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b700) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.330004] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:10.591 [2024-11-26 19:26:44.330009] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:10.591 [2024-11-26 19:26:44.330019] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.330023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x709550) 00:20:10.591 [2024-11-26 19:26:44.330030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.591 [2024-11-26 19:26:44.330040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b700, cid 4, qid 0 00:20:10.591 [2024-11-26 19:26:44.330262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.591 [2024-11-26 19:26:44.330270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.591 [2024-11-26 19:26:44.330274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.330277] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x709550): datao=0, datal=4096, cccid=4 00:20:10.591 [2024-11-26 19:26:44.330282] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x76b700) on tqpair(0x709550): expected_datao=0, payload_size=4096 00:20:10.591 [2024-11-26 19:26:44.330286] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.330297] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.330301] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.375111] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.591 [2024-11-26 19:26:44.375122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.591 [2024-11-26 19:26:44.375125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.375129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b700) on tqpair=0x709550 00:20:10.591 [2024-11-26 19:26:44.375143] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:10.591 [2024-11-26 19:26:44.375169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.591 [2024-11-26 19:26:44.375174] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x709550) 00:20:10.592 [2024-11-26 19:26:44.375182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.592 [2024-11-26 19:26:44.375189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x709550) 00:20:10.592 [2024-11-26 19:26:44.375205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.592 [2024-11-26 19:26:44.375221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b700, cid 4, qid 0 00:20:10.592 [2024-11-26 19:26:44.375227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b880, cid 5, qid 0 00:20:10.592 [2024-11-26 19:26:44.375442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.592 [2024-11-26 19:26:44.375449] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.592 [2024-11-26 19:26:44.375452] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375456] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x709550): datao=0, datal=1024, cccid=4 00:20:10.592 [2024-11-26 19:26:44.375460] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x76b700) on tqpair(0x709550): expected_datao=0, 
payload_size=1024 00:20:10.592 [2024-11-26 19:26:44.375465] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375471] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375475] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.592 [2024-11-26 19:26:44.375487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.592 [2024-11-26 19:26:44.375490] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.375494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b880) on tqpair=0x709550 00:20:10.592 [2024-11-26 19:26:44.417324] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.592 [2024-11-26 19:26:44.417340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.592 [2024-11-26 19:26:44.417344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b700) on tqpair=0x709550 00:20:10.592 [2024-11-26 19:26:44.417364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x709550) 00:20:10.592 [2024-11-26 19:26:44.417377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.592 [2024-11-26 19:26:44.417395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b700, cid 4, qid 0 00:20:10.592 [2024-11-26 19:26:44.417590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.592 [2024-11-26 19:26:44.417597] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.592 [2024-11-26 19:26:44.417600] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417604] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x709550): datao=0, datal=3072, cccid=4 00:20:10.592 [2024-11-26 19:26:44.417608] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x76b700) on tqpair(0x709550): expected_datao=0, payload_size=3072 00:20:10.592 [2024-11-26 19:26:44.417613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417625] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417629] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417812] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.592 [2024-11-26 19:26:44.417819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.592 [2024-11-26 19:26:44.417822] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b700) on tqpair=0x709550 00:20:10.592 [2024-11-26 19:26:44.417834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.417843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x709550) 00:20:10.592 [2024-11-26 19:26:44.417849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.592 [2024-11-26 19:26:44.417864] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b700, cid 4, qid 0 00:20:10.592 [2024-11-26 19:26:44.418090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.592 [2024-11-26 
19:26:44.418096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.592 [2024-11-26 19:26:44.418106] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.418110] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x709550): datao=0, datal=8, cccid=4 00:20:10.592 [2024-11-26 19:26:44.418114] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x76b700) on tqpair(0x709550): expected_datao=0, payload_size=8 00:20:10.592 [2024-11-26 19:26:44.418119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.418125] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.592 [2024-11-26 19:26:44.418129] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.857 [2024-11-26 19:26:44.462116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.857 [2024-11-26 19:26:44.462129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.857 [2024-11-26 19:26:44.462132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.857 [2024-11-26 19:26:44.462136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b700) on tqpair=0x709550 00:20:10.857 ===================================================== 00:20:10.857 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:10.857 ===================================================== 00:20:10.857 Controller Capabilities/Features 00:20:10.857 ================================ 00:20:10.857 Vendor ID: 0000 00:20:10.857 Subsystem Vendor ID: 0000 00:20:10.857 Serial Number: .................... 00:20:10.857 Model Number: ........................................ 
00:20:10.857 Firmware Version: 25.01 00:20:10.857 Recommended Arb Burst: 0 00:20:10.857 IEEE OUI Identifier: 00 00 00 00:20:10.857 Multi-path I/O 00:20:10.857 May have multiple subsystem ports: No 00:20:10.857 May have multiple controllers: No 00:20:10.857 Associated with SR-IOV VF: No 00:20:10.857 Max Data Transfer Size: 131072 00:20:10.857 Max Number of Namespaces: 0 00:20:10.857 Max Number of I/O Queues: 1024 00:20:10.857 NVMe Specification Version (VS): 1.3 00:20:10.857 NVMe Specification Version (Identify): 1.3 00:20:10.857 Maximum Queue Entries: 128 00:20:10.857 Contiguous Queues Required: Yes 00:20:10.857 Arbitration Mechanisms Supported 00:20:10.857 Weighted Round Robin: Not Supported 00:20:10.857 Vendor Specific: Not Supported 00:20:10.857 Reset Timeout: 15000 ms 00:20:10.857 Doorbell Stride: 4 bytes 00:20:10.857 NVM Subsystem Reset: Not Supported 00:20:10.857 Command Sets Supported 00:20:10.857 NVM Command Set: Supported 00:20:10.857 Boot Partition: Not Supported 00:20:10.857 Memory Page Size Minimum: 4096 bytes 00:20:10.857 Memory Page Size Maximum: 4096 bytes 00:20:10.857 Persistent Memory Region: Not Supported 00:20:10.857 Optional Asynchronous Events Supported 00:20:10.857 Namespace Attribute Notices: Not Supported 00:20:10.857 Firmware Activation Notices: Not Supported 00:20:10.857 ANA Change Notices: Not Supported 00:20:10.857 PLE Aggregate Log Change Notices: Not Supported 00:20:10.857 LBA Status Info Alert Notices: Not Supported 00:20:10.857 EGE Aggregate Log Change Notices: Not Supported 00:20:10.857 Normal NVM Subsystem Shutdown event: Not Supported 00:20:10.857 Zone Descriptor Change Notices: Not Supported 00:20:10.857 Discovery Log Change Notices: Supported 00:20:10.857 Controller Attributes 00:20:10.857 128-bit Host Identifier: Not Supported 00:20:10.857 Non-Operational Permissive Mode: Not Supported 00:20:10.857 NVM Sets: Not Supported 00:20:10.857 Read Recovery Levels: Not Supported 00:20:10.857 Endurance Groups: Not Supported 00:20:10.857 
Predictable Latency Mode: Not Supported 00:20:10.857 Traffic Based Keep ALive: Not Supported 00:20:10.857 Namespace Granularity: Not Supported 00:20:10.857 SQ Associations: Not Supported 00:20:10.857 UUID List: Not Supported 00:20:10.857 Multi-Domain Subsystem: Not Supported 00:20:10.857 Fixed Capacity Management: Not Supported 00:20:10.857 Variable Capacity Management: Not Supported 00:20:10.857 Delete Endurance Group: Not Supported 00:20:10.857 Delete NVM Set: Not Supported 00:20:10.857 Extended LBA Formats Supported: Not Supported 00:20:10.857 Flexible Data Placement Supported: Not Supported 00:20:10.857 00:20:10.857 Controller Memory Buffer Support 00:20:10.857 ================================ 00:20:10.857 Supported: No 00:20:10.857 00:20:10.857 Persistent Memory Region Support 00:20:10.857 ================================ 00:20:10.857 Supported: No 00:20:10.857 00:20:10.857 Admin Command Set Attributes 00:20:10.857 ============================ 00:20:10.857 Security Send/Receive: Not Supported 00:20:10.857 Format NVM: Not Supported 00:20:10.857 Firmware Activate/Download: Not Supported 00:20:10.857 Namespace Management: Not Supported 00:20:10.857 Device Self-Test: Not Supported 00:20:10.857 Directives: Not Supported 00:20:10.857 NVMe-MI: Not Supported 00:20:10.857 Virtualization Management: Not Supported 00:20:10.857 Doorbell Buffer Config: Not Supported 00:20:10.857 Get LBA Status Capability: Not Supported 00:20:10.857 Command & Feature Lockdown Capability: Not Supported 00:20:10.857 Abort Command Limit: 1 00:20:10.857 Async Event Request Limit: 4 00:20:10.857 Number of Firmware Slots: N/A 00:20:10.857 Firmware Slot 1 Read-Only: N/A 00:20:10.858 Firmware Activation Without Reset: N/A 00:20:10.858 Multiple Update Detection Support: N/A 00:20:10.858 Firmware Update Granularity: No Information Provided 00:20:10.858 Per-Namespace SMART Log: No 00:20:10.858 Asymmetric Namespace Access Log Page: Not Supported 00:20:10.858 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:20:10.858 Command Effects Log Page: Not Supported 00:20:10.858 Get Log Page Extended Data: Supported 00:20:10.858 Telemetry Log Pages: Not Supported 00:20:10.858 Persistent Event Log Pages: Not Supported 00:20:10.858 Supported Log Pages Log Page: May Support 00:20:10.858 Commands Supported & Effects Log Page: Not Supported 00:20:10.858 Feature Identifiers & Effects Log Page:May Support 00:20:10.858 NVMe-MI Commands & Effects Log Page: May Support 00:20:10.858 Data Area 4 for Telemetry Log: Not Supported 00:20:10.858 Error Log Page Entries Supported: 128 00:20:10.858 Keep Alive: Not Supported 00:20:10.858 00:20:10.858 NVM Command Set Attributes 00:20:10.858 ========================== 00:20:10.858 Submission Queue Entry Size 00:20:10.858 Max: 1 00:20:10.858 Min: 1 00:20:10.858 Completion Queue Entry Size 00:20:10.858 Max: 1 00:20:10.858 Min: 1 00:20:10.858 Number of Namespaces: 0 00:20:10.858 Compare Command: Not Supported 00:20:10.858 Write Uncorrectable Command: Not Supported 00:20:10.858 Dataset Management Command: Not Supported 00:20:10.858 Write Zeroes Command: Not Supported 00:20:10.858 Set Features Save Field: Not Supported 00:20:10.858 Reservations: Not Supported 00:20:10.858 Timestamp: Not Supported 00:20:10.858 Copy: Not Supported 00:20:10.858 Volatile Write Cache: Not Present 00:20:10.858 Atomic Write Unit (Normal): 1 00:20:10.858 Atomic Write Unit (PFail): 1 00:20:10.858 Atomic Compare & Write Unit: 1 00:20:10.858 Fused Compare & Write: Supported 00:20:10.858 Scatter-Gather List 00:20:10.858 SGL Command Set: Supported 00:20:10.858 SGL Keyed: Supported 00:20:10.858 SGL Bit Bucket Descriptor: Not Supported 00:20:10.858 SGL Metadata Pointer: Not Supported 00:20:10.858 Oversized SGL: Not Supported 00:20:10.858 SGL Metadata Address: Not Supported 00:20:10.858 SGL Offset: Supported 00:20:10.858 Transport SGL Data Block: Not Supported 00:20:10.858 Replay Protected Memory Block: Not Supported 00:20:10.858 00:20:10.858 
Firmware Slot Information 00:20:10.858 ========================= 00:20:10.858 Active slot: 0 00:20:10.858 00:20:10.858 00:20:10.858 Error Log 00:20:10.858 ========= 00:20:10.858 00:20:10.858 Active Namespaces 00:20:10.858 ================= 00:20:10.858 Discovery Log Page 00:20:10.858 ================== 00:20:10.858 Generation Counter: 2 00:20:10.858 Number of Records: 2 00:20:10.858 Record Format: 0 00:20:10.858 00:20:10.858 Discovery Log Entry 0 00:20:10.858 ---------------------- 00:20:10.858 Transport Type: 3 (TCP) 00:20:10.858 Address Family: 1 (IPv4) 00:20:10.858 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:10.858 Entry Flags: 00:20:10.858 Duplicate Returned Information: 1 00:20:10.858 Explicit Persistent Connection Support for Discovery: 1 00:20:10.858 Transport Requirements: 00:20:10.858 Secure Channel: Not Required 00:20:10.858 Port ID: 0 (0x0000) 00:20:10.858 Controller ID: 65535 (0xffff) 00:20:10.858 Admin Max SQ Size: 128 00:20:10.858 Transport Service Identifier: 4420 00:20:10.858 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:10.858 Transport Address: 10.0.0.2 00:20:10.858 Discovery Log Entry 1 00:20:10.858 ---------------------- 00:20:10.858 Transport Type: 3 (TCP) 00:20:10.858 Address Family: 1 (IPv4) 00:20:10.858 Subsystem Type: 2 (NVM Subsystem) 00:20:10.858 Entry Flags: 00:20:10.858 Duplicate Returned Information: 0 00:20:10.858 Explicit Persistent Connection Support for Discovery: 0 00:20:10.858 Transport Requirements: 00:20:10.858 Secure Channel: Not Required 00:20:10.858 Port ID: 0 (0x0000) 00:20:10.858 Controller ID: 65535 (0xffff) 00:20:10.858 Admin Max SQ Size: 128 00:20:10.858 Transport Service Identifier: 4420 00:20:10.858 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:10.858 Transport Address: 10.0.0.2 [2024-11-26 19:26:44.462244] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:10.858 [2024-11-26 
19:26:44.462258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b100) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.858 [2024-11-26 19:26:44.462271] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b280) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.858 [2024-11-26 19:26:44.462281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b400) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.858 [2024-11-26 19:26:44.462291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:10.858 [2024-11-26 19:26:44.462306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.858 [2024-11-26 19:26:44.462322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.858 [2024-11-26 19:26:44.462337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.858 [2024-11-26 19:26:44.462579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.858 [2024-11-26 
19:26:44.462586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.858 [2024-11-26 19:26:44.462589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.858 [2024-11-26 19:26:44.462618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.858 [2024-11-26 19:26:44.462632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.858 [2024-11-26 19:26:44.462817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.858 [2024-11-26 19:26:44.462823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.858 [2024-11-26 19:26:44.462826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.858 [2024-11-26 19:26:44.462836] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:10.858 [2024-11-26 19:26:44.462841] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:10.858 [2024-11-26 19:26:44.462852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.858 [2024-11-26 19:26:44.462856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.858 
[2024-11-26 19:26:44.462860] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.462866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.462877] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.463051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.463060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.463063] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463067] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.463078] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463082] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463086] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.463093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.463124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.463327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.463334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.463338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 
00:20:10.859 [2024-11-26 19:26:44.463352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.463366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.463376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.463597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.463604] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.463607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.463624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.463638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.463648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.463859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.463865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 
[2024-11-26 19:26:44.463869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.463883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.463890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.463897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.463907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.464115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.464122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.464126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.464139] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464143] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.464154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.464164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 
00:20:10.859 [2024-11-26 19:26:44.464381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.464387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.464390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.464404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.464418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.464428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.464634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.464640] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.464644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.464658] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464668] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.464674] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.464684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.464889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.464897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.464901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.464915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.464923] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.464929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.464939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.465172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.465178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.465182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.465195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465199] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.465209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.465220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.465420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.465429] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.859 [2024-11-26 19:26:44.465432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465436] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.859 [2024-11-26 19:26:44.465446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465450] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.859 [2024-11-26 19:26:44.465454] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.859 [2024-11-26 19:26:44.465460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.859 [2024-11-26 19:26:44.465470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.859 [2024-11-26 19:26:44.465667] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.859 [2024-11-26 19:26:44.465673] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.465677] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465681] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.860 [2024-11-26 19:26:44.465690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.860 [2024-11-26 19:26:44.465707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.465717] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.860 [2024-11-26 19:26:44.465914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.465920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.465923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.860 [2024-11-26 19:26:44.465937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.465945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x709550) 00:20:10.860 [2024-11-26 19:26:44.465951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.465961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x76b580, cid 3, qid 0 00:20:10.860 [2024-11-26 19:26:44.470113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 
19:26:44.470122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.470126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.470130] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x76b580) on tqpair=0x709550 00:20:10.860 [2024-11-26 19:26:44.470138] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:20:10.860 00:20:10.860 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:10.860 [2024-11-26 19:26:44.503805] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:20:10.860 [2024-11-26 19:26:44.503839] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805515 ] 00:20:10.860 [2024-11-26 19:26:44.557646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:10.860 [2024-11-26 19:26:44.557710] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:10.860 [2024-11-26 19:26:44.557715] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:10.860 [2024-11-26 19:26:44.557736] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:10.860 [2024-11-26 19:26:44.557747] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:10.860 [2024-11-26 19:26:44.561426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting 
state to wait for connect adminq (no timeout) 00:20:10.860 [2024-11-26 19:26:44.561468] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x988550 0 00:20:10.860 [2024-11-26 19:26:44.569120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:10.860 [2024-11-26 19:26:44.569136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:10.860 [2024-11-26 19:26:44.569141] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:10.860 [2024-11-26 19:26:44.569150] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:10.860 [2024-11-26 19:26:44.569192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.569198] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.569203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.860 [2024-11-26 19:26:44.569217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:10.860 [2024-11-26 19:26:44.569241] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.860 [2024-11-26 19:26:44.577112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.577122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.577126] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.860 [2024-11-26 19:26:44.577144] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:10.860 [2024-11-26 19:26:44.577152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no 
timeout) 00:20:10.860 [2024-11-26 19:26:44.577158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:10.860 [2024-11-26 19:26:44.577174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.860 [2024-11-26 19:26:44.577191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.577208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.860 [2024-11-26 19:26:44.577294] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.577301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.577304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577308] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.860 [2024-11-26 19:26:44.577316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:10.860 [2024-11-26 19:26:44.577324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:10.860 [2024-11-26 19:26:44.577331] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577339] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.860 
[2024-11-26 19:26:44.577346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.577357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.860 [2024-11-26 19:26:44.577425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.577431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.577434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.860 [2024-11-26 19:26:44.577443] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:10.860 [2024-11-26 19:26:44.577453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:10.860 [2024-11-26 19:26:44.577464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577468] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577472] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.860 [2024-11-26 19:26:44.577478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.577489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.860 [2024-11-26 19:26:44.577565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.577571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 
19:26:44.577575] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.860 [2024-11-26 19:26:44.577583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:10.860 [2024-11-26 19:26:44.577593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.860 [2024-11-26 19:26:44.577607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.860 [2024-11-26 19:26:44.577618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.860 [2024-11-26 19:26:44.577711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.860 [2024-11-26 19:26:44.577717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.860 [2024-11-26 19:26:44.577721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.860 [2024-11-26 19:26:44.577724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.861 [2024-11-26 19:26:44.577729] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:10.861 [2024-11-26 19:26:44.577734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:10.861 [2024-11-26 19:26:44.577743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:10.861 [2024-11-26 19:26:44.577852] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:10.861 [2024-11-26 19:26:44.577857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:10.861 [2024-11-26 19:26:44.577865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.577869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.577872] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.577879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.861 [2024-11-26 19:26:44.577889] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.861 [2024-11-26 19:26:44.577965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.861 [2024-11-26 19:26:44.577971] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.861 [2024-11-26 19:26:44.577975] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.577979] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.861 [2024-11-26 19:26:44.577983] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:10.861 [2024-11-26 19:26:44.577996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578004] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.861 [2024-11-26 19:26:44.578021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.861 [2024-11-26 19:26:44.578085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.861 [2024-11-26 19:26:44.578091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.861 [2024-11-26 19:26:44.578094] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.861 [2024-11-26 19:26:44.578107] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:10.861 [2024-11-26 19:26:44.578112] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578120] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:10.861 [2024-11-26 19:26:44.578131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.861 [2024-11-26 19:26:44.578162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.861 [2024-11-26 19:26:44.578270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.861 [2024-11-26 19:26:44.578276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.861 [2024-11-26 19:26:44.578280] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578283] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=4096, cccid=0 00:20:10.861 [2024-11-26 19:26:44.578288] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea100) on tqpair(0x988550): expected_datao=0, payload_size=4096 00:20:10.861 [2024-11-26 19:26:44.578293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578313] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578318] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.861 [2024-11-26 19:26:44.578363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.861 [2024-11-26 19:26:44.578366] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.861 [2024-11-26 19:26:44.578378] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:10.861 [2024-11-26 19:26:44.578384] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:10.861 [2024-11-26 19:26:44.578388] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:10.861 [2024-11-26 19:26:44.578393] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:10.861 [2024-11-26 19:26:44.578400] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:10.861 [2024-11-26 19:26:44.578405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.861 [2024-11-26 19:26:44.578448] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.861 [2024-11-26 19:26:44.578520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.861 [2024-11-26 19:26:44.578526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.861 [2024-11-26 19:26:44.578529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550 00:20:10.861 [2024-11-26 19:26:44.578541] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578544] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.861 [2024-11-26 19:26:44.578560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.861 [2024-11-26 19:26:44.578579] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578587] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.861 [2024-11-26 19:26:44.578598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578602] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578605] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:10.861 [2024-11-26 19:26:44.578616] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:10.861 [2024-11-26 19:26:44.578640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.861 [2024-11-26 19:26:44.578644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550) 00:20:10.861 [2024-11-26 19:26:44.578651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.861 [2024-11-26 19:26:44.578665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea100, cid 0, qid 0 00:20:10.861 [2024-11-26 19:26:44.578671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea280, cid 1, qid 0 00:20:10.861 [2024-11-26 19:26:44.578675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea400, cid 2, qid 0 00:20:10.861 [2024-11-26 19:26:44.578680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.861 [2024-11-26 19:26:44.578685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0 00:20:10.862 [2024-11-26 19:26:44.578811] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.862 [2024-11-26 19:26:44.578817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.862 [2024-11-26 19:26:44.578820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.578824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550 00:20:10.862 [2024-11-26 19:26:44.578829] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:10.862 [2024-11-26 19:26:44.578835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.578847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.578854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.578861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.578864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.578868] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550) 00:20:10.862 [2024-11-26 19:26:44.578875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:10.862 [2024-11-26 19:26:44.578885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0 00:20:10.862 [2024-11-26 19:26:44.578956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.862 [2024-11-26 19:26:44.578962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.862 [2024-11-26 19:26:44.578965] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.578969] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550 00:20:10.862 [2024-11-26 19:26:44.579036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.579046] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.579054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550) 00:20:10.862 [2024-11-26 19:26:44.579064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.862 [2024-11-26 19:26:44.579074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0 00:20:10.862 [2024-11-26 19:26:44.579164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.862 [2024-11-26 19:26:44.579171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.862 [2024-11-26 19:26:44.579175] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579179] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=4096, cccid=4 00:20:10.862 [2024-11-26 19:26:44.579183] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea700) on tqpair(0x988550): expected_datao=0, payload_size=4096 00:20:10.862 [2024-11-26 19:26:44.579190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579197] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.862 [2024-11-26 19:26:44.579409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.862 [2024-11-26 19:26:44.579412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579416] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550 00:20:10.862 [2024-11-26 19:26:44.579429] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:10.862 [2024-11-26 19:26:44.579445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.579456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:10.862 [2024-11-26 19:26:44.579462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550) 00:20:10.862 [2024-11-26 19:26:44.579473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.862 [2024-11-26 19:26:44.579484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0 00:20:10.862 [2024-11-26 19:26:44.579588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:10.862 [2024-11-26 19:26:44.579594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:10.862 [2024-11-26 19:26:44.579598] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:10.862 [2024-11-26 19:26:44.579601] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=4096, cccid=4 00:20:10.862 [2024-11-26 19:26:44.579606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea700) on tqpair(0x988550): expected_datao=0, payload_size=4096 00:20:10.862 [2024-11-26 19:26:44.579610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.862 
[2024-11-26 19:26:44.579616] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579620] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.862 [2024-11-26 19:26:44.579701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.862 [2024-11-26 19:26:44.579705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550
00:20:10.862 [2024-11-26 19:26:44.579721] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:20:10.862 [2024-11-26 19:26:44.579731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:20:10.862 [2024-11-26 19:26:44.579737] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550)
00:20:10.862 [2024-11-26 19:26:44.579748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.862 [2024-11-26 19:26:44.579758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0
00:20:10.862 [2024-11-26 19:26:44.579846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:10.862 [2024-11-26 19:26:44.579853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:10.862 [2024-11-26 19:26:44.579859] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579863] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=4096, cccid=4
00:20:10.862 [2024-11-26 19:26:44.579867] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea700) on tqpair(0x988550): expected_datao=0, payload_size=4096
00:20:10.862 [2024-11-26 19:26:44.579871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579878] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.862 [2024-11-26 19:26:44.579966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.862 [2024-11-26 19:26:44.579969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.862 [2024-11-26 19:26:44.579973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.579984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.579993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580025] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:20:10.863 [2024-11-26 19:26:44.580030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:20:10.863 [2024-11-26 19:26:44.580035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:20:10.863 [2024-11-26 19:26:44.580053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580063] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:10.863 [2024-11-26 19:26:44.580097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0
00:20:10.863 [2024-11-26 19:26:44.580107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea880, cid 5, qid 0
00:20:10.863 [2024-11-26 19:26:44.580195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.863 [2024-11-26 19:26:44.580201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.863 [2024-11-26 19:26:44.580204] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.580218] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.863 [2024-11-26 19:26:44.580223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.863 [2024-11-26 19:26:44.580227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea880) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.580240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea880, cid 5, qid 0
00:20:10.863 [2024-11-26 19:26:44.580347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.863 [2024-11-26 19:26:44.580353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.863 [2024-11-26 19:26:44.580356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580360] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea880) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.580370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea880, cid 5, qid 0
00:20:10.863 [2024-11-26 19:26:44.580454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.863 [2024-11-26 19:26:44.580460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.863 [2024-11-26 19:26:44.580463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea880) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.580476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea880, cid 5, qid 0
00:20:10.863 [2024-11-26 19:26:44.580564] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.863 [2024-11-26 19:26:44.580570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.863 [2024-11-26 19:26:44.580574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea880) on tqpair=0x988550
00:20:10.863 [2024-11-26 19:26:44.580592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580613] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580635] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x988550)
00:20:10.863 [2024-11-26 19:26:44.580659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.863 [2024-11-26 19:26:44.580670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea880, cid 5, qid 0
00:20:10.863 [2024-11-26 19:26:44.580675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea700, cid 4, qid 0
00:20:10.863 [2024-11-26 19:26:44.580680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eaa00, cid 6, qid 0
00:20:10.863 [2024-11-26 19:26:44.580685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eab80, cid 7, qid 0
00:20:10.863 [2024-11-26 19:26:44.580864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:10.863 [2024-11-26 19:26:44.580871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:10.863 [2024-11-26 19:26:44.580874] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580878] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=8192, cccid=5
00:20:10.863 [2024-11-26 19:26:44.580882] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea880) on tqpair(0x988550): expected_datao=0, payload_size=8192
00:20:10.863 [2024-11-26 19:26:44.580887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580964] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580968] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:10.863 [2024-11-26 19:26:44.580979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:10.863 [2024-11-26 19:26:44.580983] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.580986] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=512, cccid=4
00:20:10.863 [2024-11-26 19:26:44.580991] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9ea700) on tqpair(0x988550): expected_datao=0, payload_size=512
00:20:10.863 [2024-11-26 19:26:44.580995] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581001] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581005] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581011] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:10.863 [2024-11-26 19:26:44.581016] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:10.863 [2024-11-26 19:26:44.581020] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=512, cccid=6
00:20:10.863 [2024-11-26 19:26:44.581027] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eaa00) on tqpair(0x988550): expected_datao=0, payload_size=512
00:20:10.863 [2024-11-26 19:26:44.581032] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.863 [2024-11-26 19:26:44.581042] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.581047] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:10.864 [2024-11-26 19:26:44.581053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:10.864 [2024-11-26 19:26:44.581056] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.581062] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x988550): datao=0, datal=4096, cccid=7
00:20:10.864 [2024-11-26 19:26:44.581067] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9eab80) on tqpair(0x988550): expected_datao=0, payload_size=4096
00:20:10.864 [2024-11-26 19:26:44.581071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.581084] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.581087] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.581095] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.864 [2024-11-26 19:26:44.585107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.864 [2024-11-26 19:26:44.585114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.585118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea880) on tqpair=0x988550
00:20:10.864 [2024-11-26 19:26:44.585133] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.864 [2024-11-26 19:26:44.585139] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.864 [2024-11-26 19:26:44.585143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.585147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea700) on tqpair=0x988550
00:20:10.864 [2024-11-26 19:26:44.585158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.864 [2024-11-26 19:26:44.585164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.864 [2024-11-26 19:26:44.585168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.585172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eaa00) on tqpair=0x988550
00:20:10.864 [2024-11-26 19:26:44.585179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.864 [2024-11-26 19:26:44.585185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.864 [2024-11-26 19:26:44.585188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.864 [2024-11-26 19:26:44.585192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eab80) on tqpair=0x988550
00:20:10.864 =====================================================
00:20:10.864 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:10.864 =====================================================
00:20:10.864 Controller Capabilities/Features
00:20:10.864 ================================
00:20:10.864 Vendor ID: 8086
00:20:10.864 Subsystem Vendor ID: 8086
00:20:10.864 Serial Number: SPDK00000000000001
00:20:10.864 Model Number: SPDK bdev Controller
00:20:10.864 Firmware Version: 25.01
00:20:10.864 Recommended Arb Burst: 6
00:20:10.864 IEEE OUI Identifier: e4 d2 5c
00:20:10.864 Multi-path I/O
00:20:10.864 May have multiple subsystem ports: Yes
00:20:10.864 May have multiple controllers: Yes
00:20:10.864 Associated with SR-IOV VF: No
00:20:10.864 Max Data Transfer Size: 131072
00:20:10.864 Max Number of Namespaces: 32
00:20:10.864 Max Number of I/O Queues: 127
00:20:10.864 NVMe Specification Version (VS): 1.3
00:20:10.864 NVMe Specification Version (Identify): 1.3
00:20:10.864 Maximum Queue Entries: 128
00:20:10.864 Contiguous Queues Required: Yes
00:20:10.864 Arbitration Mechanisms Supported
00:20:10.864 Weighted Round Robin: Not Supported
00:20:10.864 Vendor Specific: Not Supported
00:20:10.864 Reset Timeout: 15000 ms
00:20:10.864 Doorbell Stride: 4 bytes
00:20:10.864 NVM Subsystem Reset: Not Supported
00:20:10.864 Command Sets Supported
00:20:10.864 NVM Command Set: Supported
00:20:10.864 Boot Partition: Not Supported
00:20:10.864 Memory Page Size Minimum: 4096 bytes
00:20:10.864 Memory Page Size Maximum: 4096 bytes
00:20:10.864 Persistent Memory Region: Not Supported
00:20:10.864 Optional Asynchronous Events Supported
00:20:10.864 Namespace Attribute Notices: Supported
00:20:10.864 Firmware Activation Notices: Not Supported
00:20:10.864 ANA Change Notices: Not Supported
00:20:10.864 PLE Aggregate Log Change Notices: Not Supported
00:20:10.864 LBA Status Info Alert Notices: Not Supported
00:20:10.864 EGE Aggregate Log Change Notices: Not Supported
00:20:10.864 Normal NVM Subsystem Shutdown event: Not Supported
00:20:10.864 Zone Descriptor Change Notices: Not Supported
00:20:10.864 Discovery Log Change Notices: Not Supported
00:20:10.864 Controller Attributes
00:20:10.864 128-bit Host Identifier: Supported
00:20:10.864 Non-Operational Permissive Mode: Not Supported
00:20:10.864 NVM Sets: Not Supported
00:20:10.864 Read Recovery Levels: Not Supported
00:20:10.864 Endurance Groups: Not Supported
00:20:10.864 Predictable Latency Mode: Not Supported
00:20:10.864 Traffic Based Keep ALive: Not Supported
00:20:10.864 Namespace Granularity: Not Supported
00:20:10.864 SQ Associations: Not Supported
00:20:10.864 UUID List: Not Supported
00:20:10.864 Multi-Domain Subsystem: Not Supported
00:20:10.864 Fixed Capacity Management: Not Supported
00:20:10.864 Variable Capacity Management: Not Supported
00:20:10.864 Delete Endurance Group: Not Supported
00:20:10.864 Delete NVM Set: Not Supported
00:20:10.864 Extended LBA Formats Supported: Not Supported
00:20:10.864 Flexible Data Placement Supported: Not Supported
00:20:10.864 
00:20:10.864 Controller Memory Buffer Support
00:20:10.864 ================================
00:20:10.864 Supported: No
00:20:10.864 
00:20:10.864 Persistent Memory Region Support
00:20:10.864 ================================
00:20:10.864 Supported: No
00:20:10.864 
00:20:10.864 Admin Command Set Attributes
00:20:10.864 ============================
00:20:10.864 Security Send/Receive: Not Supported
00:20:10.864 Format NVM: Not Supported
00:20:10.864 Firmware Activate/Download: Not Supported
00:20:10.864 Namespace Management: Not Supported
00:20:10.864 Device Self-Test: Not Supported
00:20:10.864 Directives: Not Supported
00:20:10.864 NVMe-MI: Not Supported
00:20:10.864 Virtualization Management: Not Supported
00:20:10.864 Doorbell Buffer Config: Not Supported
00:20:10.864 Get LBA Status Capability: Not Supported
00:20:10.864 Command & Feature Lockdown Capability: Not Supported
00:20:10.864 Abort Command Limit: 4
00:20:10.864 Async Event Request Limit: 4
00:20:10.864 Number of Firmware Slots: N/A
00:20:10.864 Firmware Slot 1 Read-Only: N/A
00:20:10.864 Firmware Activation Without Reset: N/A
00:20:10.864 Multiple Update Detection Support: N/A
00:20:10.864 Firmware Update Granularity: No Information Provided
00:20:10.864 Per-Namespace SMART Log: No
00:20:10.864 Asymmetric Namespace Access Log Page: Not Supported
00:20:10.864 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:10.864 Command Effects Log Page: Supported
00:20:10.864 Get Log Page Extended Data: Supported
00:20:10.864 Telemetry Log Pages: Not Supported
00:20:10.864 Persistent Event Log Pages: Not Supported
00:20:10.864 Supported Log Pages Log Page: May Support
00:20:10.864 Commands Supported & Effects Log Page: Not Supported
00:20:10.864 Feature Identifiers & Effects Log Page:May Support
00:20:10.864 NVMe-MI Commands & Effects Log Page: May Support
00:20:10.864 Data Area 4 for Telemetry Log: Not Supported
00:20:10.864 Error Log Page Entries Supported: 128
00:20:10.864 Keep Alive: Supported
00:20:10.864 Keep Alive Granularity: 10000 ms
00:20:10.864 
00:20:10.864 NVM Command Set Attributes
00:20:10.864 ==========================
00:20:10.864 Submission Queue Entry Size
00:20:10.864 Max: 64
00:20:10.864 Min: 64
00:20:10.864 Completion Queue Entry Size
00:20:10.864 Max: 16
00:20:10.864 Min: 16
00:20:10.864 Number of Namespaces: 32
00:20:10.864 Compare Command: Supported
00:20:10.864 Write Uncorrectable Command: Not Supported
00:20:10.864 Dataset Management Command: Supported
00:20:10.864 Write Zeroes Command: Supported
00:20:10.864 Set Features Save Field: Not Supported
00:20:10.864 Reservations: Supported
00:20:10.864 Timestamp: Not Supported
00:20:10.864 Copy: Supported
00:20:10.864 Volatile Write Cache: Present
00:20:10.864 Atomic Write Unit (Normal): 1
00:20:10.864 Atomic Write Unit (PFail): 1
00:20:10.864 Atomic Compare & Write Unit: 1
00:20:10.864 Fused Compare & Write: Supported
00:20:10.864 Scatter-Gather List
00:20:10.865 SGL Command Set: Supported
00:20:10.865 SGL Keyed: Supported
00:20:10.865 SGL Bit Bucket Descriptor: Not Supported
00:20:10.865 SGL Metadata Pointer: Not Supported
00:20:10.865 Oversized SGL: Not Supported
00:20:10.865 SGL Metadata Address: Not Supported
00:20:10.865 SGL Offset: Supported
00:20:10.865 Transport SGL Data Block: Not Supported
00:20:10.865 Replay Protected Memory Block: Not Supported
00:20:10.865 
00:20:10.865 Firmware Slot Information
00:20:10.865 =========================
00:20:10.865 Active slot: 1
00:20:10.865 Slot 1 Firmware Revision: 25.01
00:20:10.865 
00:20:10.865 
00:20:10.865 Commands Supported and Effects
00:20:10.865 ==============================
00:20:10.865 Admin Commands
00:20:10.865 --------------
00:20:10.865 Get Log Page (02h): Supported 
00:20:10.865 Identify (06h): Supported 
00:20:10.865 Abort (08h): Supported 
00:20:10.865 Set Features (09h): Supported 
00:20:10.865 Get Features (0Ah): Supported 
00:20:10.865 Asynchronous Event Request (0Ch): Supported 
00:20:10.865 Keep Alive (18h): Supported 
00:20:10.865 I/O Commands
00:20:10.865 ------------
00:20:10.865 Flush (00h): Supported LBA-Change 
00:20:10.865 Write (01h): Supported LBA-Change 
00:20:10.865 Read (02h): Supported 
00:20:10.865 Compare (05h): Supported 
00:20:10.865 Write Zeroes (08h): Supported LBA-Change 
00:20:10.865 Dataset Management (09h): Supported LBA-Change 
00:20:10.865 Copy (19h): Supported LBA-Change 
00:20:10.865 
00:20:10.865 Error Log
00:20:10.865 =========
00:20:10.865 
00:20:10.865 Arbitration
00:20:10.865 ===========
00:20:10.865 Arbitration Burst: 1
00:20:10.865 
00:20:10.865 Power Management
00:20:10.865 ================
00:20:10.865 Number of Power States: 1
00:20:10.865 Current Power State: Power State #0
00:20:10.865 Power State #0:
00:20:10.865 Max Power: 0.00 W
00:20:10.865 Non-Operational State: Operational
00:20:10.865 Entry Latency: Not Reported
00:20:10.865 Exit Latency: Not Reported
00:20:10.865 Relative Read Throughput: 0
00:20:10.865 Relative Read Latency: 0
00:20:10.865 Relative Write Throughput: 0
00:20:10.865 Relative Write Latency: 0
00:20:10.865 Idle Power: Not Reported
00:20:10.865 Active Power: Not Reported
00:20:10.865 Non-Operational Permissive Mode: Not Supported
00:20:10.865 
00:20:10.865 Health Information
00:20:10.865 ==================
00:20:10.865 Critical Warnings:
00:20:10.865 Available Spare Space: OK
00:20:10.865 Temperature: OK
00:20:10.865 Device Reliability: OK
00:20:10.865 Read Only: No
00:20:10.865 Volatile Memory Backup: OK
00:20:10.865 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:10.865 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:10.865 Available Spare: 0%
00:20:10.865 Available Spare Threshold: 0%
00:20:10.865 Life Percentage Used:[2024-11-26 19:26:44.585299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585304] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x988550)
00:20:10.865 [2024-11-26 19:26:44.585312] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.865 [2024-11-26 19:26:44.585326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9eab80, cid 7, qid 0
00:20:10.865 [2024-11-26 19:26:44.585416] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.865 [2024-11-26 19:26:44.585422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.865 [2024-11-26 19:26:44.585426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585430] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9eab80) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585469] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:20:10.865 [2024-11-26 19:26:44.585480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea100) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.865 [2024-11-26 19:26:44.585493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea280) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.865 [2024-11-26 19:26:44.585503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea400) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.865 [2024-11-26 19:26:44.585517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:10.865 [2024-11-26 19:26:44.585531] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.865 [2024-11-26 19:26:44.585546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.865 [2024-11-26 19:26:44.585559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.865 [2024-11-26 19:26:44.585634] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.865 [2024-11-26 19:26:44.585641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.865 [2024-11-26 19:26:44.585644] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.865 [2024-11-26 19:26:44.585669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.865 [2024-11-26 19:26:44.585684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.865 [2024-11-26 19:26:44.585761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.865 [2024-11-26 19:26:44.585768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.865 [2024-11-26 19:26:44.585771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585780] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:20:10.865 [2024-11-26 19:26:44.585785] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:20:10.865 [2024-11-26 19:26:44.585794] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.865 [2024-11-26 19:26:44.585809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.865 [2024-11-26 19:26:44.585819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.865 [2024-11-26 19:26:44.585882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.865 [2024-11-26 19:26:44.585889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.865 [2024-11-26 19:26:44.585892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.585906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.585914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.865 [2024-11-26 19:26:44.585921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.865 [2024-11-26 19:26:44.585932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.865 [2024-11-26 19:26:44.586005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.865 [2024-11-26 19:26:44.586011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.865 [2024-11-26 19:26:44.586015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.586019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.865 [2024-11-26 19:26:44.586029] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.865 [2024-11-26 19:26:44.586033] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586136] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586153] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586171] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586362] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586365] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586369] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586386] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586474] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586609] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550)
00:20:10.866 [2024-11-26 19:26:44.586737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:10.866 [2024-11-26 19:26:44.586747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0
00:20:10.866 [2024-11-26 19:26:44.586806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:10.866 [2024-11-26 19:26:44.586813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:10.866 [2024-11-26 19:26:44.586816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550
00:20:10.866 [2024-11-26 19:26:44.586830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:10.866 [2024-11-26 19:26:44.586834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:10.866 
[2024-11-26 19:26:44.586838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.866 [2024-11-26 19:26:44.586844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.866 [2024-11-26 19:26:44.586855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.866 [2024-11-26 19:26:44.586918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.866 [2024-11-26 19:26:44.586925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.866 [2024-11-26 19:26:44.586930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.586934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.866 [2024-11-26 19:26:44.586944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.586948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.586952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.866 [2024-11-26 19:26:44.586958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.866 [2024-11-26 19:26:44.586969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.866 [2024-11-26 19:26:44.587048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.866 [2024-11-26 19:26:44.587055] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.866 [2024-11-26 19:26:44.587058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 
00:20:10.866 [2024-11-26 19:26:44.587072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.866 [2024-11-26 19:26:44.587086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.866 [2024-11-26 19:26:44.587097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.866 [2024-11-26 19:26:44.587161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.866 [2024-11-26 19:26:44.587167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.866 [2024-11-26 19:26:44.587171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.866 [2024-11-26 19:26:44.587184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.866 [2024-11-26 19:26:44.587192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.866 [2024-11-26 19:26:44.587199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 
[2024-11-26 19:26:44.587289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587303] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 
00:20:10.867 [2024-11-26 19:26:44.587504] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587514] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587535] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587654] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587661] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587754] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587866] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.587890] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587896] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.587906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.587916] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.587986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.587992] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.587995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.587999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.588010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.588024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.588034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.588108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.588114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.588118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588122] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.588132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588136] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.588146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.588156] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.588230] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 19:26:44.588237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.588240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588244] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.867 [2024-11-26 19:26:44.588254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588262] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.867 [2024-11-26 19:26:44.588269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.867 [2024-11-26 19:26:44.588279] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.867 [2024-11-26 19:26:44.588345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.867 [2024-11-26 
19:26:44.588352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.867 [2024-11-26 19:26:44.588355] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.867 [2024-11-26 19:26:44.588359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588369] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.588396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.588460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.588466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.588469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 
19:26:44.588508] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.588567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.588573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.588577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588581] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588590] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588594] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588598] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.588614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.588694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.588700] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.588703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588721] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588725] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.588742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.588822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.588828] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.588831] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588835] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.588872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.588943] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.588950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.588953] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.588967] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:10.868 [2024-11-26 19:26:44.588971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.588974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.588981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.588991] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.589064] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.589070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.589074] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.589077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.589088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.589092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:10.868 [2024-11-26 19:26:44.589095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x988550) 00:20:10.868 [2024-11-26 19:26:44.593108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:10.868 [2024-11-26 19:26:44.593122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9ea580, cid 3, qid 0 00:20:10.868 [2024-11-26 19:26:44.593197] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:10.868 [2024-11-26 19:26:44.593204] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:10.868 [2024-11-26 19:26:44.593207] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:10.868 [2024-11-26 19:26:44.593211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9ea580) on tqpair=0x988550 00:20:10.868 [2024-11-26 19:26:44.593220] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:20:10.868 0% 00:20:10.868 Data Units Read: 0 00:20:10.868 Data Units Written: 0 00:20:10.868 Host Read Commands: 0 00:20:10.868 Host Write Commands: 0 00:20:10.868 Controller Busy Time: 0 minutes 00:20:10.868 Power Cycles: 0 00:20:10.868 Power On Hours: 0 hours 00:20:10.868 Unsafe Shutdowns: 0 00:20:10.868 Unrecoverable Media Errors: 0 00:20:10.868 Lifetime Error Log Entries: 0 00:20:10.868 Warning Temperature Time: 0 minutes 00:20:10.868 Critical Temperature Time: 0 minutes 00:20:10.868 00:20:10.868 Number of Queues 00:20:10.868 ================ 00:20:10.868 Number of I/O Submission Queues: 127 00:20:10.868 Number of I/O Completion Queues: 127 00:20:10.868 00:20:10.868 Active Namespaces 00:20:10.868 ================= 00:20:10.868 Namespace ID:1 00:20:10.869 Error Recovery Timeout: Unlimited 00:20:10.869 Command Set Identifier: NVM (00h) 00:20:10.869 Deallocate: Supported 00:20:10.869 Deallocated/Unwritten Error: Not Supported 00:20:10.869 Deallocated Read Value: Unknown 00:20:10.869 Deallocate in Write Zeroes: Not Supported 00:20:10.869 Deallocated Guard Field: 0xFFFF 00:20:10.869 Flush: Supported 00:20:10.869 Reservation: Supported 00:20:10.869 Namespace Sharing Capabilities: Multiple Controllers 00:20:10.869 Size (in LBAs): 131072 (0GiB) 00:20:10.869 Capacity (in LBAs): 131072 (0GiB) 00:20:10.869 Utilization (in LBAs): 131072 (0GiB) 00:20:10.869 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:10.869 EUI64: ABCDEF0123456789 00:20:10.869 UUID: c547a7a8-222c-4e17-ba3b-780ab01f95c3 00:20:10.869 Thin Provisioning: Not Supported 00:20:10.869 Per-NS Atomic Units: Yes 00:20:10.869 Atomic Boundary Size (Normal): 0 00:20:10.869 Atomic Boundary Size (PFail): 0 00:20:10.869 
Atomic Boundary Offset: 0 00:20:10.869 Maximum Single Source Range Length: 65535 00:20:10.869 Maximum Copy Length: 65535 00:20:10.869 Maximum Source Range Count: 1 00:20:10.869 NGUID/EUI64 Never Reused: No 00:20:10.869 Namespace Write Protected: No 00:20:10.869 Number of LBA Formats: 1 00:20:10.869 Current LBA Format: LBA Format #00 00:20:10.869 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:10.869 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:10.869 rmmod nvme_tcp 00:20:10.869 rmmod nvme_fabrics 00:20:10.869 rmmod nvme_keyring 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@128 -- # set -e 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3805166 ']' 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3805166 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3805166 ']' 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3805166 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:10.869 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3805166 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3805166' 00:20:11.129 killing process with pid 3805166 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3805166 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3805166 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 
00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:11.129 19:26:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.665 19:26:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:13.665 00:20:13.665 real 0m9.284s 00:20:13.665 user 0m7.257s 00:20:13.665 sys 0m4.480s 00:20:13.665 19:26:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.665 19:26:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:13.665 ************************************ 00:20:13.665 END TEST nvmf_identify 00:20:13.665 ************************************ 00:20:13.665 19:26:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.665 ************************************ 00:20:13.665 START TEST nvmf_perf 00:20:13.665 ************************************ 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:13.665 * Looking for test storage... 00:20:13.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( 
v = 0 )) 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:13.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.665 --rc genhtml_branch_coverage=1 00:20:13.665 --rc genhtml_function_coverage=1 00:20:13.665 --rc genhtml_legend=1 00:20:13.665 --rc geninfo_all_blocks=1 00:20:13.665 --rc geninfo_unexecuted_blocks=1 00:20:13.665 00:20:13.665 ' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:13.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.665 --rc genhtml_branch_coverage=1 00:20:13.665 --rc genhtml_function_coverage=1 00:20:13.665 --rc genhtml_legend=1 00:20:13.665 --rc geninfo_all_blocks=1 00:20:13.665 --rc geninfo_unexecuted_blocks=1 00:20:13.665 00:20:13.665 ' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:13.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.665 --rc genhtml_branch_coverage=1 00:20:13.665 --rc genhtml_function_coverage=1 00:20:13.665 --rc genhtml_legend=1 00:20:13.665 --rc geninfo_all_blocks=1 00:20:13.665 --rc geninfo_unexecuted_blocks=1 00:20:13.665 00:20:13.665 ' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:13.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.665 --rc genhtml_branch_coverage=1 00:20:13.665 --rc genhtml_function_coverage=1 00:20:13.665 --rc genhtml_legend=1 00:20:13.665 --rc geninfo_all_blocks=1 00:20:13.665 --rc geninfo_unexecuted_blocks=1 00:20:13.665 00:20:13.665 ' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.665 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:13.666 19:26:47 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.666 19:26:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:18.935 19:26:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.935 
19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:18.935 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:18.935 Found 0000:31:00.1 (0x8086 - 
0x159b) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:18.935 Found net devices under 0000:31:00.0: cvl_0_0 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:18.935 19:26:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:18.935 Found net devices under 0000:31:00.1: cvl_0_1 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:18.935 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.936 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:19.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.631 ms 00:20:19.196 00:20:19.196 --- 10.0.0.2 ping statistics --- 00:20:19.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.196 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:19.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:20:19.196 00:20:19.196 --- 10.0.0.1 ping statistics --- 00:20:19.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.196 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3809860 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3809860 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3809860 ']' 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:19.196 19:26:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.197 [2024-11-26 19:26:52.886295] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:20:19.197 [2024-11-26 19:26:52.886360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.197 [2024-11-26 19:26:52.983576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.197 [2024-11-26 19:26:53.037080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.197 [2024-11-26 19:26:53.037152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.197 [2024-11-26 19:26:53.037161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.197 [2024-11-26 19:26:53.037168] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.197 [2024-11-26 19:26:53.037174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:19.197 [2024-11-26 19:26:53.043129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.197 [2024-11-26 19:26:53.043226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.197 [2024-11-26 19:26:53.043388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.197 [2024-11-26 19:26:53.043392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:20.135 19:26:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:20.393 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:20.393 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:20.653 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:20:20.653 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:20.912 19:26:54 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:20.912 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:20:20.912 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:20.912 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:20.912 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.912 [2024-11-26 19:26:54.673438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.912 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:21.171 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.171 19:26:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:21.171 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:21.171 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:21.430 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.690 [2024-11-26 19:26:55.316738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.690 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:20:21.690 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:20:21.690 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:21.690 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:21.690 19:26:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:20:23.068 Initializing NVMe Controllers 00:20:23.068 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:20:23.068 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:20:23.068 Initialization complete. Launching workers. 00:20:23.068 ======================================================== 00:20:23.068 Latency(us) 00:20:23.068 Device Information : IOPS MiB/s Average min max 00:20:23.068 PCIE (0000:65:00.0) NSID 1 from core 0: 108573.75 424.12 294.13 31.66 4359.60 00:20:23.068 ======================================================== 00:20:23.068 Total : 108573.75 424.12 294.13 31.66 4359.60 00:20:23.068 00:20:23.068 19:26:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:24.446 Initializing NVMe Controllers 00:20:24.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:24.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:24.446 Initialization complete. Launching workers. 
00:20:24.446 ======================================================== 00:20:24.446 Latency(us) 00:20:24.446 Device Information : IOPS MiB/s Average min max 00:20:24.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.71 0.32 12406.12 197.29 44968.69 00:20:24.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 64.76 0.25 16054.11 6974.15 53874.01 00:20:24.446 ======================================================== 00:20:24.446 Total : 145.47 0.57 14030.22 197.29 53874.01 00:20:24.446 00:20:24.446 19:26:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:25.822 Initializing NVMe Controllers 00:20:25.822 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:25.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:25.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:25.823 Initialization complete. Launching workers. 
00:20:25.823 ======================================================== 00:20:25.823 Latency(us) 00:20:25.823 Device Information : IOPS MiB/s Average min max 00:20:25.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11935.00 46.62 2687.08 431.99 9077.95 00:20:25.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3853.00 15.05 8342.89 4561.68 15812.62 00:20:25.823 ======================================================== 00:20:25.823 Total : 15788.00 61.67 4067.36 431.99 15812.62 00:20:25.823 00:20:25.823 19:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:25.823 19:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:25.823 19:26:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.364 Initializing NVMe Controllers 00:20:28.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.364 Controller IO queue size 128, less than required. 00:20:28.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.364 Controller IO queue size 128, less than required. 00:20:28.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:28.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:28.364 Initialization complete. Launching workers. 
00:20:28.364 ======================================================== 00:20:28.364 Latency(us) 00:20:28.364 Device Information : IOPS MiB/s Average min max 00:20:28.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1776.53 444.13 73623.56 35734.11 121403.70 00:20:28.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 590.35 147.59 228836.47 56148.80 370485.88 00:20:28.364 ======================================================== 00:20:28.364 Total : 2366.88 591.72 112336.64 35734.11 370485.88 00:20:28.364 00:20:28.364 19:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:28.624 No valid NVMe controllers or AIO or URING devices found 00:20:28.624 Initializing NVMe Controllers 00:20:28.624 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:28.624 Controller IO queue size 128, less than required. 00:20:28.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.624 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:28.624 Controller IO queue size 128, less than required. 00:20:28.624 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:28.624 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:28.624 WARNING: Some requested NVMe devices were skipped 00:20:28.624 19:27:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:31.161 Initializing NVMe Controllers 00:20:31.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.161 Controller IO queue size 128, less than required. 00:20:31.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.161 Controller IO queue size 128, less than required. 00:20:31.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:31.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.161 Initialization complete. Launching workers. 
00:20:31.161 00:20:31.161 ==================== 00:20:31.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:31.161 TCP transport: 00:20:31.161 polls: 42164 00:20:31.161 idle_polls: 26644 00:20:31.161 sock_completions: 15520 00:20:31.161 nvme_completions: 7041 00:20:31.161 submitted_requests: 10640 00:20:31.161 queued_requests: 1 00:20:31.161 00:20:31.161 ==================== 00:20:31.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:31.161 TCP transport: 00:20:31.161 polls: 45496 00:20:31.161 idle_polls: 28147 00:20:31.161 sock_completions: 17349 00:20:31.161 nvme_completions: 6895 00:20:31.161 submitted_requests: 10274 00:20:31.161 queued_requests: 1 00:20:31.161 ======================================================== 00:20:31.161 Latency(us) 00:20:31.161 Device Information : IOPS MiB/s Average min max 00:20:31.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1759.98 439.99 73714.34 37591.82 120846.29 00:20:31.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1723.48 430.87 75084.90 27929.63 117485.01 00:20:31.161 ======================================================== 00:20:31.161 Total : 3483.46 870.86 74392.44 27929.63 120846.29 00:20:31.161 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:31.161 19:27:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:31.161 19:27:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:31.161 rmmod nvme_tcp 00:20:31.161 rmmod nvme_fabrics 00:20:31.161 rmmod nvme_keyring 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3809860 ']' 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3809860 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3809860 ']' 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3809860 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3809860 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3809860' 00:20:31.420 killing process with pid 3809860 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 3809860 00:20:31.420 19:27:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3809860 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.352 19:27:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.258 00:20:35.258 real 0m22.051s 00:20:35.258 user 0m56.453s 00:20:35.258 sys 0m6.963s 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:35.258 ************************************ 00:20:35.258 END TEST nvmf_perf 00:20:35.258 ************************************ 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.258 19:27:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:35.522 ************************************ 00:20:35.522 START TEST nvmf_fio_host 00:20:35.522 ************************************ 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:35.522 * Looking for test storage... 00:20:35.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.522 19:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.522 19:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.522 --rc genhtml_branch_coverage=1 00:20:35.522 --rc genhtml_function_coverage=1 00:20:35.522 --rc genhtml_legend=1 00:20:35.522 --rc geninfo_all_blocks=1 00:20:35.522 --rc geninfo_unexecuted_blocks=1 00:20:35.522 00:20:35.522 ' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.522 --rc genhtml_branch_coverage=1 00:20:35.522 --rc genhtml_function_coverage=1 00:20:35.522 --rc genhtml_legend=1 00:20:35.522 --rc geninfo_all_blocks=1 00:20:35.522 --rc geninfo_unexecuted_blocks=1 00:20:35.522 00:20:35.522 ' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.522 --rc genhtml_branch_coverage=1 00:20:35.522 --rc genhtml_function_coverage=1 00:20:35.522 --rc genhtml_legend=1 00:20:35.522 --rc geninfo_all_blocks=1 00:20:35.522 --rc geninfo_unexecuted_blocks=1 00:20:35.522 00:20:35.522 ' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.522 --rc genhtml_branch_coverage=1 00:20:35.522 --rc genhtml_function_coverage=1 00:20:35.522 --rc genhtml_legend=1 00:20:35.522 --rc geninfo_all_blocks=1 00:20:35.522 --rc geninfo_unexecuted_blocks=1 00:20:35.522 00:20:35.522 ' 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.522 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.523 19:27:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.523 19:27:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:31:00.0 (0x8086 - 0x159b)' 00:20:40.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.978 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:40.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.979 19:27:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:40.979 Found net devices under 0000:31:00.0: cvl_0_0 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:40.979 Found net devices under 0000:31:00.1: cvl_0_1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:40.979 19:27:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:40.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:40.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:20:40.979 00:20:40.979 --- 10.0.0.2 ping statistics --- 00:20:40.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.979 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:40.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:40.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:20:40.979 00:20:40.979 --- 10.0.0.1 ping statistics --- 00:20:40.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.979 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3817829 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3817829 00:20:40.979 
19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3817829 ']' 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.979 19:27:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.979 [2024-11-26 19:27:14.650534] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:20:40.979 [2024-11-26 19:27:14.650584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.979 [2024-11-26 19:27:14.722644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.979 [2024-11-26 19:27:14.752473] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.979 [2024-11-26 19:27:14.752503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:40.979 [2024-11-26 19:27:14.752509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.979 [2024-11-26 19:27:14.752514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.979 [2024-11-26 19:27:14.752518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.979 [2024-11-26 19:27:14.754058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.979 [2024-11-26 19:27:14.754212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.979 [2024-11-26 19:27:14.754251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.979 [2024-11-26 19:27:14.754253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:41.914 [2024-11-26 19:27:15.573215] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.914 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:42.172 Malloc1 00:20:42.172 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:42.172 19:27:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:42.430 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:42.430 [2024-11-26 19:27:16.254125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.430 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:42.689 19:27:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:42.689 19:27:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:42.947 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:42.947 fio-3.35 00:20:42.947 Starting 1 thread 00:20:45.477 00:20:45.477 test: (groupid=0, jobs=1): err= 0: pid=3818649: Tue Nov 26 19:27:19 2024 00:20:45.477 read: IOPS=13.8k, BW=53.8MiB/s (56.4MB/s)(108MiB/2005msec) 00:20:45.477 slat (nsec): min=1409, max=74016, avg=1449.17, stdev=621.07 00:20:45.477 clat (usec): min=1580, max=8721, avg=5141.80, stdev=348.99 00:20:45.477 lat (usec): min=1592, max=8723, avg=5143.25, stdev=348.95 00:20:45.477 clat percentiles (usec): 00:20:45.477 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4752], 20.00th=[ 4883], 00:20:45.477 | 30.00th=[ 4948], 40.00th=[ 5080], 50.00th=[ 5145], 60.00th=[ 5211], 00:20:45.477 | 70.00th=[ 5342], 80.00th=[ 5407], 90.00th=[ 5538], 95.00th=[ 5669], 00:20:45.477 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6915], 99.95th=[ 7439], 00:20:45.477 | 99.99th=[ 8717] 00:20:45.477 bw ( KiB/s): min=53880, max=55600, per=100.00%, avg=55090.00, stdev=810.29, samples=4 00:20:45.477 iops : min=13470, max=13900, avg=13772.50, stdev=202.57, samples=4 00:20:45.477 write: IOPS=13.8k, BW=53.7MiB/s (56.3MB/s)(108MiB/2005msec); 0 zone resets 00:20:45.477 slat (nsec): min=1440, max=67959, avg=1504.19, stdev=438.04 00:20:45.477 clat (usec): min=738, max=7938, avg=4102.31, stdev=306.85 00:20:45.477 lat (usec): min=743, max=7939, avg=4103.81, stdev=306.83 00:20:45.477 clat percentiles (usec): 00:20:45.477 | 1.00th=[ 3425], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:20:45.477 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:20:45.477 | 
70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4555], 00:20:45.477 | 99.00th=[ 4752], 99.50th=[ 4817], 99.90th=[ 6849], 99.95th=[ 7373], 00:20:45.477 | 99.99th=[ 7898] 00:20:45.477 bw ( KiB/s): min=54224, max=55408, per=100.00%, avg=55028.00, stdev=550.61, samples=4 00:20:45.477 iops : min=13556, max=13852, avg=13757.00, stdev=137.65, samples=4 00:20:45.478 lat (usec) : 750=0.01%, 1000=0.01% 00:20:45.478 lat (msec) : 2=0.04%, 4=17.73%, 10=82.23% 00:20:45.478 cpu : usr=71.76%, sys=27.25%, ctx=41, majf=0, minf=16 00:20:45.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:45.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:45.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:45.478 issued rwts: total=27610,27576,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:45.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:45.478 00:20:45.478 Run status group 0 (all jobs): 00:20:45.478 READ: bw=53.8MiB/s (56.4MB/s), 53.8MiB/s-53.8MiB/s (56.4MB/s-56.4MB/s), io=108MiB (113MB), run=2005-2005msec 00:20:45.478 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=108MiB (113MB), run=2005-2005msec 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:45.478 19:27:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:46.045 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:46.045 fio-3.35 00:20:46.045 Starting 1 thread 00:20:46.979 [2024-11-26 19:27:20.510825] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8f90 is same with the state(6) to be set 00:20:48.356 00:20:48.356 test: (groupid=0, jobs=1): err= 0: pid=3819334: Tue Nov 26 19:27:22 2024 00:20:48.356 read: IOPS=12.3k, BW=193MiB/s (202MB/s)(387MiB/2004msec) 00:20:48.356 slat (nsec): min=2328, max=76360, avg=2458.14, stdev=1007.53 00:20:48.357 clat (usec): min=2020, max=13020, avg=6345.66, stdev=1696.85 00:20:48.357 lat (usec): min=2022, max=13022, avg=6348.12, stdev=1696.92 00:20:48.357 clat percentiles (usec): 00:20:48.357 | 1.00th=[ 3294], 5.00th=[ 3884], 10.00th=[ 4293], 20.00th=[ 4817], 00:20:48.357 | 30.00th=[ 5211], 40.00th=[ 5669], 50.00th=[ 6128], 60.00th=[ 6718], 00:20:48.357 | 70.00th=[ 7308], 80.00th=[ 7963], 90.00th=[ 8455], 95.00th=[ 9110], 00:20:48.357 | 99.00th=[10945], 99.50th=[11338], 99.90th=[11994], 99.95th=[12387], 00:20:48.357 | 99.99th=[12911] 00:20:48.357 bw ( KiB/s): min=95872, max=99072, per=49.22%, avg=97224.00, stdev=1348.15, samples=4 00:20:48.357 iops : min= 5992, max= 6192, avg=6076.50, stdev=84.26, samples=4 00:20:48.357 write: IOPS=7361, BW=115MiB/s (121MB/s)(199MiB/1728msec); 0 zone resets 00:20:48.357 slat (usec): min=27, max=136, avg=27.83, stdev= 2.35 00:20:48.357 clat (usec): min=2480, 
max=13076, avg=7079.40, stdev=1132.27 00:20:48.357 lat (usec): min=2514, max=13103, avg=7107.23, stdev=1132.30 00:20:48.357 clat percentiles (usec): 00:20:48.357 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 6194], 00:20:48.357 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 6980], 60.00th=[ 7242], 00:20:48.357 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9110], 00:20:48.357 | 99.00th=[10159], 99.50th=[10814], 99.90th=[11600], 99.95th=[11863], 00:20:48.357 | 99.99th=[11994] 00:20:48.357 bw ( KiB/s): min=99712, max=102688, per=85.94%, avg=101232.00, stdev=1544.97, samples=4 00:20:48.357 iops : min= 6232, max= 6418, avg=6327.00, stdev=96.56, samples=4 00:20:48.357 lat (msec) : 4=4.27%, 10=93.55%, 20=2.18% 00:20:48.357 cpu : usr=81.98%, sys=15.78%, ctx=28, majf=0, minf=32 00:20:48.357 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:20:48.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:48.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:48.357 issued rwts: total=24743,12721,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:48.357 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:48.357 00:20:48.357 Run status group 0 (all jobs): 00:20:48.357 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=387MiB (405MB), run=2004-2004msec 00:20:48.357 WRITE: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=199MiB (208MB), run=1728-1728msec 00:20:48.357 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.617 rmmod nvme_tcp 00:20:48.617 rmmod nvme_fabrics 00:20:48.617 rmmod nvme_keyring 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3817829 ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3817829 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3817829 ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3817829 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3817829 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:48.617 19:27:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3817829' 00:20:48.617 killing process with pid 3817829 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3817829 00:20:48.617 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3817829 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.876 19:27:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.782 00:20:50.782 real 0m15.425s 00:20:50.782 user 1m2.499s 00:20:50.782 sys 0m5.989s 00:20:50.782 19:27:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.782 ************************************ 00:20:50.782 END TEST nvmf_fio_host 00:20:50.782 ************************************ 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.782 ************************************ 00:20:50.782 START TEST nvmf_failover 00:20:50.782 ************************************ 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:20:50.782 * Looking for test storage... 
00:20:50.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:50.782 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.043 --rc genhtml_branch_coverage=1 00:20:51.043 --rc genhtml_function_coverage=1 00:20:51.043 --rc genhtml_legend=1 00:20:51.043 --rc geninfo_all_blocks=1 00:20:51.043 --rc geninfo_unexecuted_blocks=1 00:20:51.043 00:20:51.043 ' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:20:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.043 --rc genhtml_branch_coverage=1 00:20:51.043 --rc genhtml_function_coverage=1 00:20:51.043 --rc genhtml_legend=1 00:20:51.043 --rc geninfo_all_blocks=1 00:20:51.043 --rc geninfo_unexecuted_blocks=1 00:20:51.043 00:20:51.043 ' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.043 --rc genhtml_branch_coverage=1 00:20:51.043 --rc genhtml_function_coverage=1 00:20:51.043 --rc genhtml_legend=1 00:20:51.043 --rc geninfo_all_blocks=1 00:20:51.043 --rc geninfo_unexecuted_blocks=1 00:20:51.043 00:20:51.043 ' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.043 --rc genhtml_branch_coverage=1 00:20:51.043 --rc genhtml_function_coverage=1 00:20:51.043 --rc genhtml_legend=1 00:20:51.043 --rc geninfo_all_blocks=1 00:20:51.043 --rc geninfo_unexecuted_blocks=1 00:20:51.043 00:20:51.043 ' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.043 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.044 19:27:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:56.319 19:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:56.319 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.319 19:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:56.319 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.319 19:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:56.319 Found net devices under 0000:31:00.0: cvl_0_0 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:56.319 Found net devices under 0000:31:00.1: cvl_0_1 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:56.319 19:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:56.319 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:56.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:56.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:20:56.320 00:20:56.320 --- 10.0.0.2 ping statistics --- 00:20:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.320 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:56.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:56.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:20:56.320 00:20:56.320 --- 10.0.0.1 ping statistics --- 00:20:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:56.320 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3824177 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3824177 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3824177 ']' 00:20:56.320 19:27:29 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:56.320 19:27:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:56.320 [2024-11-26 19:27:30.004282] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:20:56.320 [2024-11-26 19:27:30.004332] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.320 [2024-11-26 19:27:30.077293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:56.320 [2024-11-26 19:27:30.106412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.320 [2024-11-26 19:27:30.106441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.320 [2024-11-26 19:27:30.106447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.320 [2024-11-26 19:27:30.106452] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:56.320 [2024-11-26 19:27:30.106456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.320 [2024-11-26 19:27:30.107611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.320 [2024-11-26 19:27:30.107762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.320 [2024-11-26 19:27:30.107765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:56.581 [2024-11-26 19:27:30.347692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.581 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:20:56.841 Malloc0 00:20:56.841 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.841 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:57.100 19:27:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.359 [2024-11-26 19:27:31.002717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.359 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:57.359 [2024-11-26 19:27:31.163171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:57.359 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:20:57.619 [2024-11-26 19:27:31.323589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3824530 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3824530 /var/tmp/bdevperf.sock 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3824530 ']' 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:57.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:20:57.619 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:20:57.879 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.879 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:20:57.879 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:58.138 NVMe0n1 00:20:58.138 19:27:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:20:58.397 00:20:58.657 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3824675 00:20:58.657 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:20:58.657 19:27:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:20:59.595 19:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 [2024-11-26 19:27:33.417774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd17370 is same with the state(6) to be set 00:20:59.595 [... identical tcp.c:1773 message for tqpair=0xd17370 repeated, timestamps 19:27:33.417812 through .417951 ...] 00:20:59.596 19:27:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:02.888 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:03.149 00:21:03.149 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:03.149 [2024-11-26 19:27:36.937043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd181e0 is same with the state(6) to be set 00:21:03.149 [... identical tcp.c:1773 message for tqpair=0xd181e0 repeated, timestamps 19:27:36.937077 through .937310 ...] 00:21:03.150 [2024-11-26 19:27:36.937314] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xd181e0 is same with the state(6) to be set 00:21:03.150 19:27:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:06.447 19:27:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:06.447 [2024-11-26 19:27:40.116483] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.447 19:27:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:07.386 19:27:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:07.645 19:27:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3824675 00:21:14.224 { 00:21:14.224 "results": [ 00:21:14.224 { 00:21:14.224 "job": "NVMe0n1", 00:21:14.224 "core_mask": "0x1", 00:21:14.224 "workload": "verify", 00:21:14.224 "status": "finished", 00:21:14.224 "verify_range": { 00:21:14.224 "start": 0, 00:21:14.224 "length": 16384 00:21:14.224 }, 00:21:14.224 "queue_depth": 128, 00:21:14.224 "io_size": 4096, 00:21:14.224 "runtime": 15.007803, 00:21:14.225 "iops": 12638.558755068947, 00:21:14.225 "mibps": 49.369370136988074, 00:21:14.225 "io_failed": 7181, 00:21:14.225 "io_timeout": 0, 00:21:14.225 "avg_latency_us": 9737.814005222037, 00:21:14.225 "min_latency_us": 529.0666666666667, 00:21:14.225 "max_latency_us": 12451.84 00:21:14.225 } 00:21:14.225 ], 00:21:14.225 "core_count": 1 00:21:14.225 } 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3824530 ']' 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@958 -- # kill -0 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824530' 00:21:14.225 killing process with pid 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3824530 00:21:14.225 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.225 [2024-11-26 19:27:31.375289] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:21:14.225 [2024-11-26 19:27:31.375347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3824530 ] 00:21:14.225 [2024-11-26 19:27:31.439944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.225 [2024-11-26 19:27:31.469412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.225 Running I/O for 15 seconds... 
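The JSON summary printed when the run finishes (job NVMe0n1, io_size 4096, iops ≈ 12638.6) can be sanity-checked offline. A minimal sketch, with the field values copied from the log above and no dependency on SPDK, shows that the reported "mibps" is simply iops × io_size rescaled to MiB/s:

```python
import json

# bdevperf result block, values copied verbatim from the log above
results = json.loads("""
{
  "results": [
    {
      "job": "NVMe0n1",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.007803,
      "iops": 12638.558755068947,
      "mibps": 49.369370136988074,
      "io_failed": 7181,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# mibps = (IOs per second * bytes per IO) / 2^20
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(round(derived_mibps, 6))  # agrees with the reported "mibps" field
# rough count of completed IOs over the ~15 s window
print(round(job["iops"] * job["runtime"]))
```

The `io_failed` count of 7181 reflects IOs aborted while listeners were being removed during the failover exercise, not a data-integrity failure of the verify workload.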
00:21:14.225 11394.00 IOPS, 44.51 MiB/s [2024-11-26T18:27:48.090Z] [2024-11-26 19:27:33.418589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.225 [2024-11-26 19:27:33.418620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.225 [2024-11-26 19:27:33.418639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.225 [2024-11-26 19:27:33.418655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.225 [2024-11-26 19:27:33.418671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa7d90 is same with the state(6) to be set 00:21:14.225 [2024-11-26 19:27:33.418725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 
19:27:33.418749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.418992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.418999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419041] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.225 [2024-11-26 19:27:33.419166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.225 [2024-11-26 19:27:33.419173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 
[2024-11-26 19:27:33.419240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419334] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.226 [2024-11-26 19:27:33.419408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419526] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 
[2024-11-26 19:27:33.419718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.226 [2024-11-26 19:27:33.419835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.226 [2024-11-26 19:27:33.419843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.419985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.419992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 
19:27:33.420001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420093] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420294] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.227 [2024-11-26 19:27:33.420353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:14.227 [2024-11-26 19:27:33.420486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.227 [2024-11-26 19:27:33.420503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.227 [2024-11-26 19:27:33.420512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.228 [2024-11-26 19:27:33.420622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.228 [2024-11-26 19:27:33.420664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.228 [2024-11-26 19:27:33.420672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.228 [2024-11-26 19:27:33.420681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.228 [2024-11-26 19:27:33.420688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" pairs repeat for lba:97696 through lba:97776 ...]
00:21:14.228 [2024-11-26 19:27:33.420891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:14.228 [2024-11-26 19:27:33.420899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:14.228 [2024-11-26 19:27:33.420906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97784 len:8 PRP1 0x0 PRP2 0x0
00:21:14.228 [2024-11-26 19:27:33.420913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.228 [2024-11-26 19:27:33.420953] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:14.228 [2024-11-26 19:27:33.420963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:21:14.228 [2024-11-26 19:27:33.424516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:21:14.228 [2024-11-26 19:27:33.424539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa7d90 (9): Bad file descriptor
00:21:14.228 [2024-11-26 19:27:33.454059] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:21:14.228 11235.00 IOPS, 43.89 MiB/s [2024-11-26T18:27:48.093Z] 11755.67 IOPS, 45.92 MiB/s [2024-11-26T18:27:48.093Z] 12032.25 IOPS, 47.00 MiB/s [2024-11-26T18:27:48.093Z]
00:21:14.228 [2024-11-26 19:27:36.937841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:14.228 [2024-11-26 19:27:36.937870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / "ABORTED - SQ DELETION (00/08)" pairs repeat for lba:61528 through lba:62280 ...]
00:21:14.231 [2024-11-26 19:27:36.939003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31
nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 
[2024-11-26 19:27:36.939077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 
lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 
19:27:36.939278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939339] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.231 [2024-11-26 19:27:36.939346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.231 [2024-11-26 19:27:36.939350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:36.939362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:14.232 [2024-11-26 19:27:36.939382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:14.232 [2024-11-26 19:27:36.939388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62536 len:8 PRP1 0x0 PRP2 0x0 00:21:14.232 [2024-11-26 19:27:36.939394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939426] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:14.232 [2024-11-26 19:27:36.939443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:36.939449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:36.939461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:36.939473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:36.939483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:36.939489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:21:14.232 [2024-11-26 19:27:36.941902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:21:14.232 [2024-11-26 19:27:36.941923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa7d90 (9): Bad file descriptor 00:21:14.232 [2024-11-26 19:27:37.005664] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:21:14.232 12002.20 IOPS, 46.88 MiB/s [2024-11-26T18:27:48.097Z] 12149.83 IOPS, 47.46 MiB/s [2024-11-26T18:27:48.097Z] 12242.86 IOPS, 47.82 MiB/s [2024-11-26T18:27:48.097Z] 12316.50 IOPS, 48.11 MiB/s [2024-11-26T18:27:48.097Z] [2024-11-26 19:27:41.284485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:41.284527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.284536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:41.284542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.284548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:41.284553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.284559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:14.232 [2024-11-26 19:27:41.284564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.284569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa7d90 is same with the state(6) to be set 00:21:14.232 [2024-11-26 19:27:41.285823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:41.285833] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:4792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 
[2024-11-26 19:27:41.285984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.285989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.285996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:41.286038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:41.286049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:41.286061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:14.232 [2024-11-26 19:27:41.286072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4896 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.232 [2024-11-26 19:27:41.286141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.232 [2024-11-26 19:27:41.286146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 
19:27:41.286188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:4968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:14.233 [2024-11-26 19:27:41.286247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:14.233 [2024-11-26 19:27:41.286253] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.233 [2024-11-26 19:27:41.286259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:14.233 [2024-11-26 19:27:41.286264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided: the remaining queued READ commands (lba 5000-5688) and interleaved WRITE commands (lba 5744-5768) were each aborted with ABORTED - SQ DELETION (00/08) during queue pair teardown ...]
00:21:14.235 [2024-11-26 19:27:41.287343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aca740 is same with the state(6) to be set
00:21:14.235 [2024-11-26 19:27:41.287349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:14.235 [2024-11-26 19:27:41.287353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:14.235 [2024-11-26 19:27:41.287358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:8 PRP1 0x0 PRP2 0x0
00:21:14.235 [2024-11-26 19:27:41.287363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:14.235 [2024-11-26 19:27:41.287397] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:14.235 [2024-11-26 19:27:41.287405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:21:14.235 [2024-11-26 19:27:41.289893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:21:14.235 [2024-11-26 19:27:41.289913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa7d90 (9): Bad file descriptor
00:21:14.235 [2024-11-26 19:27:41.326581] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:21:14.235 12317.44 IOPS, 48.12 MiB/s
[2024-11-26T18:27:48.100Z] 12397.40 IOPS, 48.43 MiB/s
[2024-11-26T18:27:48.100Z] 12481.18 IOPS, 48.75 MiB/s
[2024-11-26T18:27:48.100Z] 12523.33 IOPS, 48.92 MiB/s
[2024-11-26T18:27:48.100Z] 12567.08 IOPS, 49.09 MiB/s
[2024-11-26T18:27:48.100Z] 12604.79 IOPS, 49.24 MiB/s
[2024-11-26T18:27:48.100Z] 12638.67 IOPS, 49.37 MiB/s
00:21:14.235 Latency(us)
00:21:14.235 [2024-11-26T18:27:48.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.235 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:14.235 Verification LBA range: start 0x0 length 0x4000
00:21:14.235 NVMe0n1 : 15.01 12638.56 49.37 478.48 0.00 9737.81 529.07 12451.84
00:21:14.235 [2024-11-26T18:27:48.100Z] ===================================================================================================================
00:21:14.235 [2024-11-26T18:27:48.100Z] Total : 12638.56 49.37 478.48 0.00 9737.81 529.07 12451.84
00:21:14.235 Received shutdown signal, test time was about 15.000000 seconds
00:21:14.235
00:21:14.235 Latency(us)
00:21:14.235 [2024-11-26T18:27:48.100Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:14.235 [2024-11-26T18:27:48.100Z] ===================================================================================================================
00:21:14.235 [2024-11-26T18:27:48.100Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3828043
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3828043 /var/tmp/bdevperf.sock
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3828043 ']'
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:14.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:14.236 [2024-11-26 19:27:47.905261] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:14.236 19:27:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:14.236 [2024-11-26 19:27:48.061604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on
10.0.0.2 port 4422 *** 00:21:14.236 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:14.496 NVMe0n1 00:21:14.497 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:14.756 00:21:14.756 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:21:15.016 00:21:15.016 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:15.016 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:15.275 19:27:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.275 19:27:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:18.566 19:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:18.566 19:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:18.566 19:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3829024 
00:21:18.566 19:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.566 19:27:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3829024 00:21:19.503 { 00:21:19.503 "results": [ 00:21:19.503 { 00:21:19.503 "job": "NVMe0n1", 00:21:19.503 "core_mask": "0x1", 00:21:19.503 "workload": "verify", 00:21:19.503 "status": "finished", 00:21:19.503 "verify_range": { 00:21:19.503 "start": 0, 00:21:19.503 "length": 16384 00:21:19.503 }, 00:21:19.503 "queue_depth": 128, 00:21:19.503 "io_size": 4096, 00:21:19.503 "runtime": 1.007122, 00:21:19.503 "iops": 12858.422316263572, 00:21:19.503 "mibps": 50.22821217290458, 00:21:19.503 "io_failed": 0, 00:21:19.503 "io_timeout": 0, 00:21:19.503 "avg_latency_us": 9913.525868725868, 00:21:19.503 "min_latency_us": 1624.7466666666667, 00:21:19.503 "max_latency_us": 10540.373333333333 00:21:19.503 } 00:21:19.503 ], 00:21:19.503 "core_count": 1 00:21:19.503 } 00:21:19.503 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:19.503 [2024-11-26 19:27:47.598401] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:21:19.503 [2024-11-26 19:27:47.598460] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3828043 ] 00:21:19.503 [2024-11-26 19:27:47.663519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.503 [2024-11-26 19:27:47.692351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.503 [2024-11-26 19:27:49.040224] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:19.503 [2024-11-26 19:27:49.040261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.503 [2024-11-26 19:27:49.040269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.503 [2024-11-26 19:27:49.040276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.503 [2024-11-26 19:27:49.040282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.503 [2024-11-26 19:27:49.040288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.503 [2024-11-26 19:27:49.040293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.503 [2024-11-26 19:27:49.040298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.503 [2024-11-26 19:27:49.040303] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.503 [2024-11-26 19:27:49.040308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:21:19.503 [2024-11-26 19:27:49.040329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:21:19.503 [2024-11-26 19:27:49.040339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1096d90 (9): Bad file descriptor 00:21:19.503 [2024-11-26 19:27:49.131288] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:21:19.503 Running I/O for 1 seconds... 00:21:19.503 12815.00 IOPS, 50.06 MiB/s 00:21:19.503 Latency(us) 00:21:19.503 [2024-11-26T18:27:53.368Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.503 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:19.503 Verification LBA range: start 0x0 length 0x4000 00:21:19.503 NVMe0n1 : 1.01 12858.42 50.23 0.00 0.00 9913.53 1624.75 10540.37 00:21:19.503 [2024-11-26T18:27:53.368Z] =================================================================================================================== 00:21:19.503 [2024-11-26T18:27:53.368Z] Total : 12858.42 50.23 0.00 0.00 9913.53 1624.75 10540.37 00:21:19.503 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:19.503 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:19.760 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.018 19:27:53 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:20.018 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:20.018 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.277 19:27:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:23.603 19:27:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:23.603 19:27:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3828043 ']' 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3828043' 00:21:23.603 killing 
process with pid 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3828043 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.603 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.603 rmmod nvme_tcp 00:21:23.603 rmmod nvme_fabrics 00:21:23.861 rmmod nvme_keyring 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3824177 ']' 00:21:23.861 19:27:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3824177 ']' 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3824177' 00:21:23.861 killing process with pid 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3824177 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.861 19:27:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.861 19:27:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:26.401 00:21:26.401 real 0m35.111s 00:21:26.401 user 1m52.057s 00:21:26.401 sys 0m6.547s 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:26.401 ************************************ 00:21:26.401 END TEST nvmf_failover 00:21:26.401 ************************************ 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.401 ************************************ 00:21:26.401 START TEST nvmf_host_discovery 00:21:26.401 ************************************ 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:26.401 * Looking for test storage... 
00:21:26.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.401 --rc genhtml_branch_coverage=1 00:21:26.401 --rc genhtml_function_coverage=1 00:21:26.401 --rc 
genhtml_legend=1 00:21:26.401 --rc geninfo_all_blocks=1 00:21:26.401 --rc geninfo_unexecuted_blocks=1 00:21:26.401 00:21:26.401 ' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.401 --rc genhtml_branch_coverage=1 00:21:26.401 --rc genhtml_function_coverage=1 00:21:26.401 --rc genhtml_legend=1 00:21:26.401 --rc geninfo_all_blocks=1 00:21:26.401 --rc geninfo_unexecuted_blocks=1 00:21:26.401 00:21:26.401 ' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.401 --rc genhtml_branch_coverage=1 00:21:26.401 --rc genhtml_function_coverage=1 00:21:26.401 --rc genhtml_legend=1 00:21:26.401 --rc geninfo_all_blocks=1 00:21:26.401 --rc geninfo_unexecuted_blocks=1 00:21:26.401 00:21:26.401 ' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:26.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.401 --rc genhtml_branch_coverage=1 00:21:26.401 --rc genhtml_function_coverage=1 00:21:26.401 --rc genhtml_legend=1 00:21:26.401 --rc geninfo_all_blocks=1 00:21:26.401 --rc geninfo_unexecuted_blocks=1 00:21:26.401 00:21:26.401 ' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.401 19:27:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.401 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.402 19:27:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.402 19:27:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.402 19:27:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.680 
19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.680 19:28:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:31.680 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:31.680 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:31.680 Found net devices under 0000:31:00.0: cvl_0_0 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:31.680 Found net devices under 0000:31:00.1: cvl_0_1 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.680 19:28:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:21:31.680 00:21:31.680 --- 10.0.0.2 ping statistics --- 00:21:31.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.680 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:21:31.680 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:21:31.681 00:21:31.681 --- 10.0.0.1 ping statistics --- 00:21:31.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.681 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.681 
19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3834552 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3834552 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3834552 ']' 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.681 [2024-11-26 19:28:05.248051] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:21:31.681 [2024-11-26 19:28:05.248123] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.681 [2024-11-26 19:28:05.315068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.681 [2024-11-26 19:28:05.350466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.681 [2024-11-26 19:28:05.350506] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.681 [2024-11-26 19:28:05.350513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.681 [2024-11-26 19:28:05.350518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.681 [2024-11-26 19:28:05.350522] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.681 [2024-11-26 19:28:05.351173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 [2024-11-26 19:28:05.453313] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 [2024-11-26 19:28:05.461492] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:31.681 19:28:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 null0 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 null1 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3834578 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3834578 /tmp/host.sock 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3834578 ']' 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:21:31.681 
19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:31.681 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:31.681 19:28:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:31.681 [2024-11-26 19:28:05.525901] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:21:31.681 [2024-11-26 19:28:05.525946] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3834578 ] 00:21:31.940 [2024-11-26 19:28:05.602870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.940 [2024-11-26 19:28:05.639326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:32.507 19:28:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:32.507 19:28:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.507 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.768 19:28:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:32.768 19:28:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 [2024-11-26 19:28:06.516117] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- 
# jq -r '.[].name' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:32.768 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.769 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.029 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.029 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:21:33.029 19:28:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:21:33.597 [2024-11-26 19:28:07.362275] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:33.597 [2024-11-26 19:28:07.362295] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:33.597 [2024-11-26 19:28:07.362310] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:33.597 [2024-11-26 19:28:07.448573] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:33.857 [2024-11-26 19:28:07.670854] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.857 [2024-11-26 19:28:07.671786] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x10196a0:1 started. 00:21:33.857 [2024-11-26 19:28:07.673423] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:33.857 [2024-11-26 19:28:07.673440] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:33.857 [2024-11-26 19:28:07.680596] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x10196a0 was disconnected and freed. delete nvme_qpair. 
00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:21:33.857 19:28:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.240 19:28:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 
00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.240 19:28:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.240 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:35.241 [2024-11-26 19:28:08.827029] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1019bd0:1 started. 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.241 [2024-11-26 19:28:08.834060] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1019bd0 was disconnected and freed. delete nvme_qpair. 
00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.241 [2024-11-26 19:28:08.890518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:35.241 [2024-11-26 19:28:08.891549] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:35.241 [2024-11-26 19:28:08.891567] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.241 19:28:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:35.241 19:28:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:35.241 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.242 [2024-11-26 19:28:08.979811] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:35.242 19:28:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:21:35.242 [2024-11-26 19:28:09.079588] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:21:35.242 [2024-11-26 19:28:09.079616] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:35.242 [2024-11-26 19:28:09.079622] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:21:35.242 [2024-11-26 19:28:09.079625] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.180 19:28:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:36.180 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:36.557 [2024-11-26 19:28:10.062230] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:36.557 [2024-11-26 19:28:10.062250] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.557 [2024-11-26 19:28:10.066694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.557 [2024-11-26 19:28:10.066710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.557 [2024-11-26 19:28:10.066717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 ns 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 
00:21:36.557 id:0 cdw10:00000000 cdw11:00000000 00:21:36.557 [2024-11-26 19:28:10.066724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.557 [2024-11-26 19:28:10.066730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.557 [2024-11-26 19:28:10.066735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.557 [2024-11-26 19:28:10.066741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.557 [2024-11-26 19:28:10.066747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.557 [2024-11-26 19:28:10.066752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:21:36.557 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:36.558 19:28:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:36.558 [2024-11-26 19:28:10.076709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.558 [2024-11-26 19:28:10.086741] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.086751] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.086754] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.086758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.086771] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.087061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.087071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.087077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.087086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.087094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.087104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.087110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.087115] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.087119] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.087123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 [2024-11-26 19:28:10.096799] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.096808] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.096811] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.096814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.096828] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.097110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.097119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.097124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.097133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.097140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.097146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.097151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.097155] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.097159] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.097163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:36.558 [2024-11-26 19:28:10.106858] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.106869] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.106872] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.106875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.106886] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.107305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.107335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.107347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.107361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.107970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.107980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.107985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.107990] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.107994] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.107997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 [2024-11-26 19:28:10.116916] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.116926] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.116930] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.116933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.116945] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.117363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.117393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.117402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.117417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.117438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.117446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.117452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.117457] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.117461] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.117464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.558 [2024-11-26 19:28:10.126976] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.126986] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.126990] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.126993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.127005] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.127393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.127423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.127432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.127446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.127463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.127469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.127474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.127479] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.127483] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.127486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:21:36.558 [2024-11-26 19:28:10.137035] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.137047] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.137051] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.137054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.137066] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.137265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.137275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.137280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.137288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.137296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.137300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.137305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.137313] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.137316] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.137320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:36.558 [2024-11-26 19:28:10.147095] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.147108] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.147111] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.147115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.147125] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.147439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.147448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.147453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.147461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.147468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.147472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.147477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.147481] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.147485] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.147488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.558 [2024-11-26 19:28:10.157155] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.157172] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.157176] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.157179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.157190] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.157528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.157540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.157546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.157554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.157561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.157566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.157571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.157575] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.558 [2024-11-26 19:28:10.157579] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.558 [2024-11-26 19:28:10.157582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]]
00:21:36.558 [2024-11-26 19:28:10.167219] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.558 [2024-11-26 19:28:10.167229] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.558 [2024-11-26 19:28:10.167232] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.558 [2024-11-26 19:28:10.167235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.558 [2024-11-26 19:28:10.167245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.558 19:28:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:21:36.558 [2024-11-26 19:28:10.167537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.558 [2024-11-26 19:28:10.167546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.558 [2024-11-26 19:28:10.167552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.558 [2024-11-26 19:28:10.167560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.558 [2024-11-26 19:28:10.167570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.558 [2024-11-26 19:28:10.167576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.558 [2024-11-26 19:28:10.167581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.558 [2024-11-26 19:28:10.167585] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.559 [2024-11-26 19:28:10.167588] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.559 [2024-11-26 19:28:10.167591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.559 [2024-11-26 19:28:10.177274] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.559 [2024-11-26 19:28:10.177282] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.559 [2024-11-26 19:28:10.177285] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.559 [2024-11-26 19:28:10.177288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.559 [2024-11-26 19:28:10.177303] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.559 [2024-11-26 19:28:10.177352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.559 [2024-11-26 19:28:10.177360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.559 [2024-11-26 19:28:10.177366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.559 [2024-11-26 19:28:10.177374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.559 [2024-11-26 19:28:10.177381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.559 [2024-11-26 19:28:10.177386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.559 [2024-11-26 19:28:10.177391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.559 [2024-11-26 19:28:10.177395] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.559 [2024-11-26 19:28:10.177398] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.559 [2024-11-26 19:28:10.177401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.559 [2024-11-26 19:28:10.187332] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:21:36.559 [2024-11-26 19:28:10.187340] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:21:36.559 [2024-11-26 19:28:10.187343] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:21:36.559 [2024-11-26 19:28:10.187347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:21:36.559 [2024-11-26 19:28:10.187356] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:21:36.559 [2024-11-26 19:28:10.187637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:36.559 [2024-11-26 19:28:10.187645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfe9d90 with addr=10.0.0.2, port=4420
00:21:36.559 [2024-11-26 19:28:10.187650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe9d90 is same with the state(6) to be set
00:21:36.559 [2024-11-26 19:28:10.187657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe9d90 (9): Bad file descriptor
00:21:36.559 [2024-11-26 19:28:10.187668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:21:36.559 [2024-11-26 19:28:10.187673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:21:36.559 [2024-11-26 19:28:10.187678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:21:36.559 [2024-11-26 19:28:10.187681] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:21:36.559 [2024-11-26 19:28:10.187685] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:21:36.559 [2024-11-26 19:28:10.187688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:21:36.559 [2024-11-26 19:28:10.191755] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:21:36.559 [2024-11-26 19:28:10.191768] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:21:37.546 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.547 19:28:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.926 [2024-11-26 19:28:12.396282] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:38.926 [2024-11-26 19:28:12.396296] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:38.926 [2024-11-26 19:28:12.396305] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:38.927 [2024-11-26 19:28:12.484558] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:38.927 [2024-11-26 19:28:12.546218] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:21:38.927 [2024-11-26 19:28:12.546877] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1001420:1 started. 
00:21:38.927 [2024-11-26 19:28:12.548244] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:38.927 [2024-11-26 19:28:12.548265] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 [2024-11-26 19:28:12.553064] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1001420 was disconnected and freed. delete nvme_qpair. 
00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 request: 00:21:38.927 { 00:21:38.927 "name": "nvme", 00:21:38.927 "trtype": "tcp", 00:21:38.927 "traddr": "10.0.0.2", 00:21:38.927 "adrfam": "ipv4", 00:21:38.927 "trsvcid": "8009", 00:21:38.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:38.927 "wait_for_attach": true, 00:21:38.927 "method": "bdev_nvme_start_discovery", 00:21:38.927 "req_id": 1 00:21:38.927 } 00:21:38.927 Got JSON-RPC error response 00:21:38.927 response: 00:21:38.927 { 00:21:38.927 "code": -17, 00:21:38.927 "message": "File exists" 00:21:38.927 } 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 
-- # local arg=rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 request: 00:21:38.927 { 00:21:38.927 "name": "nvme_second", 00:21:38.927 "trtype": "tcp", 00:21:38.927 "traddr": "10.0.0.2", 00:21:38.927 "adrfam": "ipv4", 00:21:38.927 "trsvcid": "8009", 00:21:38.927 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:38.927 "wait_for_attach": true, 00:21:38.927 "method": "bdev_nvme_start_discovery", 00:21:38.927 "req_id": 1 00:21:38.927 } 00:21:38.927 Got JSON-RPC error response 00:21:38.927 response: 00:21:38.927 { 00:21:38.927 "code": -17, 00:21:38.927 "message": "File exists" 00:21:38.927 } 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:38.927 19:28:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.927 19:28:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:39.864 [2024-11-26 19:28:13.711445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:39.864 [2024-11-26 19:28:13.711466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1010b70 with addr=10.0.0.2, port=8010 00:21:39.864 [2024-11-26 19:28:13.711476] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:39.864 [2024-11-26 19:28:13.711481] nvme.c: 842:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:21:39.864 [2024-11-26 19:28:13.711490] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:41.241 [2024-11-26 19:28:14.713783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:41.241 [2024-11-26 19:28:14.713801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x101a400 with addr=10.0.0.2, port=8010 00:21:41.241 [2024-11-26 19:28:14.713810] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:41.241 [2024-11-26 19:28:14.713815] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:41.241 [2024-11-26 19:28:14.713819] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:42.179 [2024-11-26 19:28:15.715782] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:42.179 request: 00:21:42.179 { 00:21:42.179 "name": "nvme_second", 00:21:42.179 "trtype": "tcp", 00:21:42.179 "traddr": "10.0.0.2", 00:21:42.179 "adrfam": "ipv4", 00:21:42.179 "trsvcid": "8010", 00:21:42.179 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:42.179 "wait_for_attach": false, 00:21:42.179 "attach_timeout_ms": 3000, 00:21:42.179 "method": "bdev_nvme_start_discovery", 00:21:42.179 "req_id": 1 00:21:42.179 } 00:21:42.179 Got JSON-RPC error response 00:21:42.179 response: 00:21:42.179 { 00:21:42.179 "code": -110, 00:21:42.179 "message": "Connection timed out" 00:21:42.179 } 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3834578 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:42.179 
19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:42.179 rmmod nvme_tcp 00:21:42.179 rmmod nvme_fabrics 00:21:42.179 rmmod nvme_keyring 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3834552 ']' 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3834552 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3834552 ']' 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3834552 00:21:42.179 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3834552 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3834552' 00:21:42.180 killing process with pid 3834552 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3834552 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3834552 00:21:42.180 19:28:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.180 19:28:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:44.717 00:21:44.717 real 0m18.268s 00:21:44.717 user 0m23.350s 00:21:44.717 sys 0m5.157s 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.717 ************************************ 00:21:44.717 END TEST nvmf_host_discovery 00:21:44.717 ************************************ 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.717 ************************************ 00:21:44.717 START TEST nvmf_host_multipath_status 00:21:44.717 ************************************ 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:44.717 * Looking for test storage... 00:21:44.717 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.717 19:28:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.717 --rc genhtml_branch_coverage=1 00:21:44.717 --rc genhtml_function_coverage=1 00:21:44.717 --rc genhtml_legend=1 00:21:44.717 --rc 
geninfo_all_blocks=1 00:21:44.717 --rc geninfo_unexecuted_blocks=1 00:21:44.717 00:21:44.717 ' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.717 --rc genhtml_branch_coverage=1 00:21:44.717 --rc genhtml_function_coverage=1 00:21:44.717 --rc genhtml_legend=1 00:21:44.717 --rc geninfo_all_blocks=1 00:21:44.717 --rc geninfo_unexecuted_blocks=1 00:21:44.717 00:21:44.717 ' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.717 --rc genhtml_branch_coverage=1 00:21:44.717 --rc genhtml_function_coverage=1 00:21:44.717 --rc genhtml_legend=1 00:21:44.717 --rc geninfo_all_blocks=1 00:21:44.717 --rc geninfo_unexecuted_blocks=1 00:21:44.717 00:21:44.717 ' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.717 --rc genhtml_branch_coverage=1 00:21:44.717 --rc genhtml_function_coverage=1 00:21:44.717 --rc genhtml_legend=1 00:21:44.717 --rc geninfo_all_blocks=1 00:21:44.717 --rc geninfo_unexecuted_blocks=1 00:21:44.717 00:21:44.717 ' 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.717 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.718 19:28:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.718 19:28:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:21:44.718 19:28:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.995 
19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.995 19:28:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:49.995 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:49.995 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:49.995 Found net devices under 0000:31:00.0: cvl_0_0 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:49.995 Found net devices under 0000:31:00.1: cvl_0_1 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 
00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.995 19:28:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.995 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:49.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:21:49.996 00:21:49.996 --- 10.0.0.2 ping statistics --- 00:21:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.996 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:49.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:21:49.996 00:21:49.996 --- 10.0.0.1 ping statistics --- 00:21:49.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.996 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3841432 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 3841432 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3841432 ']' 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:49.996 19:28:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:49.996 [2024-11-26 19:28:23.722940] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:21:49.996 [2024-11-26 19:28:23.722993] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.996 [2024-11-26 19:28:23.810852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:50.256 [2024-11-26 19:28:23.862524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.256 [2024-11-26 19:28:23.862577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:50.256 [2024-11-26 19:28:23.862586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.256 [2024-11-26 19:28:23.862593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.256 [2024-11-26 19:28:23.862599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.256 [2024-11-26 19:28:23.864329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.256 [2024-11-26 19:28:23.864500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3841432 00:21:50.823 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:51.081 [2024-11-26 19:28:24.701343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.081 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:21:51.081 Malloc0 00:21:51.081 19:28:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:51.340 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.599 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:51.599 [2024-11-26 19:28:25.377128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.599 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:51.859 [2024-11-26 19:28:25.549549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3841793 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3841793 /var/tmp/bdevperf.sock 00:21:51.859 19:28:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3841793 ']' 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.859 19:28:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:52.796 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.796 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:21:52.796 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:52.796 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:53.056 Nvme0n1 00:21:53.056 19:28:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:53.316 Nvme0n1 00:21:53.576 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:53.576 19:28:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:55.484 19:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:21:55.484 19:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:21:55.743 19:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:55.743 19:28:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:21:56.683 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:21:56.683 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:21:56.683 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.683 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:56.943 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:56.943 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:21:56.943 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:56.943 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:57.202 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:57.202 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:57.202 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:21:57.202 19:28:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.202 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.202 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:57.202 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:57.202 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.462 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.462 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:57.462 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:57.462 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:21:57.722 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:21:57.981 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:21:57.981 19:28:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:21:59.363 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:21:59.363 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:21:59.363 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.363 19:28:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:21:59.363 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.623 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.623 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:21:59.623 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.623 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:21:59.881 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:00.140 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:00.140 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:00.140 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:00.140 19:28:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:00.400 19:28:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:01.337 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:01.337 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:01.337 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.337 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:01.597 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.597 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:01.597 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.597 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:01.857 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.117 19:28:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:02.376 19:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.376 19:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:02.376 19:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:02.635 19:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:02.635 19:28:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:04.015 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.274 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.274 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:04.274 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.274 19:28:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:04.274 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.274 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:04.274 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.274 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:04.533 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.533 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:04.533 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.533 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:04.793 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.793 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:04.793 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:04.793 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:05.052 19:28:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:05.991 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:05.991 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:05.991 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.991 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:06.251 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.251 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:06.251 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.251 19:28:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:06.251 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.251 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:06.251 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.251 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.511 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.511 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.511 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.511 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.771 19:28:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.771 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:07.031 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.031 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:07.031 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:07.031 19:28:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:07.290 19:28:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:08.230 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:08.230 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:08.230 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.230 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.490 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.490 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:08.490 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.490 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.749 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.750 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.750 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.009 19:28:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.270 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.270 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:09.530 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:09.530 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:09.530 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:09.790 19:28:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:10.728 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:10.728 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:10.728 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:10.728 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.988 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.248 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.248 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.248 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:11.248 19:28:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.507 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.766 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.766 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:11.766 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:11.766 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:12.026 19:28:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:12.966 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:12.966 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:12.966 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.966 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.226 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.226 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:13.226 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.226 19:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.486 19:28:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:13.486 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.746 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.746 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.746 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.746 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.006 
19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:14.006 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:14.265 19:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:14.265 19:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.640 19:28:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.640 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.899 19:28:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.899 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:16.157 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.158 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:16.158 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.158 19:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:16.416 19:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.416 19:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:16.416 19:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.416 19:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.675 19:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:17.610 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:17.610 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.610 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.610 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.869 19:28:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.869 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.126 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.126 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.126 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.126 19:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.385 
19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.385 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.642 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3841793 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3841793 ']' 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3841793 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841793 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841793' 00:22:18.643 killing process with pid 3841793 00:22:18.643 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3841793 00:22:18.643 
19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3841793 00:22:18.643 { 00:22:18.643 "results": [ 00:22:18.643 { 00:22:18.643 "job": "Nvme0n1", 00:22:18.643 "core_mask": "0x4", 00:22:18.643 "workload": "verify", 00:22:18.643 "status": "terminated", 00:22:18.643 "verify_range": { 00:22:18.643 "start": 0, 00:22:18.643 "length": 16384 00:22:18.643 }, 00:22:18.643 "queue_depth": 128, 00:22:18.643 "io_size": 4096, 00:22:18.643 "runtime": 25.098921, 00:22:18.643 "iops": 12030.835907248762, 00:22:18.643 "mibps": 46.995452762690476, 00:22:18.643 "io_failed": 0, 00:22:18.643 "io_timeout": 0, 00:22:18.643 "avg_latency_us": 10620.402215473741, 00:22:18.643 "min_latency_us": 271.36, 00:22:18.643 "max_latency_us": 3019898.88 00:22:18.643 } 00:22:18.643 ], 00:22:18.643 "core_count": 1 00:22:18.643 } 00:22:18.905 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3841793 00:22:18.905 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:18.905 [2024-11-26 19:28:25.610309] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:22:18.905 [2024-11-26 19:28:25.610387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3841793 ] 00:22:18.905 [2024-11-26 19:28:25.695869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.905 [2024-11-26 19:28:25.746938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.905 Running I/O for 90 seconds... 
00:22:18.905 11942.00 IOPS, 46.65 MiB/s [2024-11-26T18:28:52.770Z] 12385.50 IOPS, 48.38 MiB/s [2024-11-26T18:28:52.770Z] 12556.67 IOPS, 49.05 MiB/s [2024-11-26T18:28:52.770Z] 12629.00 IOPS, 49.33 MiB/s [2024-11-26T18:28:52.770Z] 12699.40 IOPS, 49.61 MiB/s [2024-11-26T18:28:52.770Z] 12743.50 IOPS, 49.78 MiB/s [2024-11-26T18:28:52.770Z] 12762.29 IOPS, 49.85 MiB/s [2024-11-26T18:28:52.770Z] 12769.38 IOPS, 49.88 MiB/s [2024-11-26T18:28:52.770Z] 12769.33 IOPS, 49.88 MiB/s [2024-11-26T18:28:52.770Z] 12779.10 IOPS, 49.92 MiB/s [2024-11-26T18:28:52.770Z] 12785.45 IOPS, 49.94 MiB/s [2024-11-26T18:28:52.770Z] [2024-11-26 19:28:38.581785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.581980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107448 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.581991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 
sqhd:0051 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 
00:22:18.905 [2024-11-26 19:28:38.582260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:18.905 [2024-11-26 19:28:38.582308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.905 [2024-11-26 19:28:38.582313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 
[2024-11-26 19:28:38.582344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.906 
[2024-11-26 19:28:38.582434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:107688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 
19:28:38.582517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:107728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.582604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:107800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 
19:28:38.582790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:107808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:107832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.582895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.582970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:107880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.582975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 
19:28:38.583020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.583132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 
19:28:38.583232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.583339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 
19:28:38.583438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.906 [2024-11-26 19:28:38.583456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.583544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:107096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 
19:28:38.583643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 19:28:38.583731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.906 [2024-11-26 19:28:38.583736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:18.906 [2024-11-26 
19:28:38.583749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:107200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 
19:28:38.583850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 
19:28:38.583956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.583993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.583998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 
19:28:38.584055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 
19:28:38.584264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:38.584307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:38.584313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:18.907 12036.58 IOPS, 47.02 MiB/s [2024-11-26T18:28:52.772Z] 11110.69 IOPS, 43.40 MiB/s [2024-11-26T18:28:52.772Z] 10317.07 IOPS, 40.30 MiB/s [2024-11-26T18:28:52.772Z] 10234.53 IOPS, 39.98 MiB/s [2024-11-26T18:28:52.772Z] 10392.81 IOPS, 40.60 MiB/s [2024-11-26T18:28:52.772Z] 10774.59 IOPS, 42.09 MiB/s [2024-11-26T18:28:52.772Z] 11112.00 IOPS, 43.41 MiB/s [2024-11-26T18:28:52.772Z] 11288.58 IOPS, 44.10 MiB/s [2024-11-26T18:28:52.772Z] 11361.50 IOPS, 44.38 MiB/s [2024-11-26T18:28:52.772Z] 11468.43 IOPS, 44.80 MiB/s [2024-11-26T18:28:52.772Z] 11721.18 IOPS, 45.79 MiB/s [2024-11-26T18:28:52.772Z] 11958.39 IOPS, 46.71 MiB/s [2024-11-26T18:28:52.772Z] [2024-11-26 19:28:50.373806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:18.907 [2024-11-26 19:28:50.373840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:110112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.374843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.374849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110288 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:74 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:18.907 [2024-11-26 19:28:50.376871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:18.907 [2024-11-26 19:28:50.376877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:18.907 12000.75 IOPS, 46.88 MiB/s [2024-11-26T18:28:52.772Z] 12030.72 IOPS, 46.99 MiB/s [2024-11-26T18:28:52.772Z] Received shutdown signal, test time was about 25.099530 seconds 00:22:18.907 00:22:18.907 Latency(us) 00:22:18.907 [2024-11-26T18:28:52.772Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:18.907 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:18.907 Verification LBA range: start 0x0 length 0x4000 00:22:18.907 Nvme0n1 : 25.10 
12030.84 47.00 0.00 0.00 10620.40 271.36 3019898.88 00:22:18.907 [2024-11-26T18:28:52.772Z] =================================================================================================================== 00:22:18.907 [2024-11-26T18:28:52.772Z] Total : 12030.84 47.00 0.00 0.00 10620.40 271.36 3019898.88 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:18.907 rmmod nvme_tcp 00:22:18.907 rmmod nvme_fabrics 00:22:18.907 rmmod nvme_keyring 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@129 -- # return 0 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3841432 ']' 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3841432 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3841432 ']' 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3841432 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:18.907 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3841432 00:22:19.204 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.204 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3841432' 00:22:19.205 killing process with pid 3841432 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3841432 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3841432 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@297 -- # iptr 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.205 19:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.105 19:28:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.105 00:22:21.105 real 0m36.882s 00:22:21.105 user 1m37.710s 00:22:21.105 sys 0m9.056s 00:22:21.105 19:28:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.105 19:28:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:21.105 ************************************ 00:22:21.105 END TEST nvmf_host_multipath_status 00:22:21.105 ************************************ 00:22:21.365 19:28:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:21.365 19:28:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.365 19:28:54 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.365 19:28:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:21.365 ************************************ 00:22:21.365 START TEST nvmf_discovery_remove_ifc 00:22:21.365 ************************************ 00:22:21.365 19:28:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:21.365 * Looking for test storage... 00:22:21.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:21.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.365 --rc genhtml_branch_coverage=1 00:22:21.365 --rc genhtml_function_coverage=1 00:22:21.365 --rc genhtml_legend=1 00:22:21.365 --rc geninfo_all_blocks=1 00:22:21.365 --rc geninfo_unexecuted_blocks=1 00:22:21.365 00:22:21.365 ' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:21.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.365 --rc genhtml_branch_coverage=1 00:22:21.365 --rc genhtml_function_coverage=1 00:22:21.365 --rc genhtml_legend=1 00:22:21.365 --rc geninfo_all_blocks=1 00:22:21.365 --rc geninfo_unexecuted_blocks=1 00:22:21.365 00:22:21.365 ' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:21.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.365 --rc genhtml_branch_coverage=1 00:22:21.365 --rc genhtml_function_coverage=1 00:22:21.365 --rc genhtml_legend=1 00:22:21.365 --rc geninfo_all_blocks=1 00:22:21.365 --rc geninfo_unexecuted_blocks=1 00:22:21.365 00:22:21.365 ' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:21.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.365 --rc genhtml_branch_coverage=1 00:22:21.365 --rc genhtml_function_coverage=1 
00:22:21.365 --rc genhtml_legend=1 00:22:21.365 --rc geninfo_all_blocks=1 00:22:21.365 --rc geninfo_unexecuted_blocks=1 00:22:21.365 00:22:21.365 ' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- 
# NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.365 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:21.366 
19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.366 19:28:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.786 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:26.786 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:26.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:26.787 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:26.787 Found net devices under 0000:31:00.0: cvl_0_0 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:26.787 19:29:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:26.787 Found net devices under 0000:31:00.1: cvl_0_1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:26.787 19:29:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.787 19:29:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:26.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:22:26.787 00:22:26.787 --- 10.0.0.2 ping statistics --- 00:22:26.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.787 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:26.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:22:26.787 00:22:26.787 --- 10.0.0.1 ping statistics --- 00:22:26.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.787 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:22:26.787 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3852328 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 3852328 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3852328 ']' 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:26.788 19:29:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:26.788 [2024-11-26 19:29:00.560221] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:22:26.788 [2024-11-26 19:29:00.560284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.788 [2024-11-26 19:29:00.638266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.047 [2024-11-26 19:29:00.673715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:27.047 [2024-11-26 19:29:00.673750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:27.047 [2024-11-26 19:29:00.673756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:27.047 [2024-11-26 19:29:00.673761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:27.047 [2024-11-26 19:29:00.673765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:27.047 [2024-11-26 19:29:00.674340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:27.616 [2024-11-26 19:29:01.378232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:27.616 [2024-11-26 19:29:01.386369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:27.616 null0 00:22:27.616 [2024-11-26 19:29:01.418369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3852361 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3852361 /tmp/host.sock 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3852361 ']' 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:27.616 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:27.616 19:29:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:27.616 [2024-11-26 19:29:01.475111] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:22:27.616 [2024-11-26 19:29:01.475159] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852361 ] 00:22:27.875 [2024-11-26 19:29:01.556775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.875 [2024-11-26 19:29:01.593125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.442 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:28.702 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.702 19:29:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:28.702 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.702 19:29:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.636 [2024-11-26 19:29:03.387291] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:29.636 [2024-11-26 19:29:03.387311] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:29.636 [2024-11-26 19:29:03.387324] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:29.636 [2024-11-26 19:29:03.473611] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:29.896 [2024-11-26 19:29:03.696899] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:22:29.896 [2024-11-26 19:29:03.697850] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x214f690:1 started. 
00:22:29.896 [2024-11-26 19:29:03.699404] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:29.896 [2024-11-26 19:29:03.699441] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:29.896 [2024-11-26 19:29:03.699463] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:29.896 [2024-11-26 19:29:03.699477] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:29.896 [2024-11-26 19:29:03.699496] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != 
\n\v\m\e\0\n\1 ]] 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:29.896 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:30.155 19:29:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:31.091 19:29:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 
00:22:32.466 19:29:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:33.404 19:29:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:34.341 19:29:07 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.341 19:29:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:34.341 19:29:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.341 19:29:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:34.341 19:29:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.281 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:35.281 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:35.282 19:29:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:35.282 [2024-11-26 19:29:09.140374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 
00:22:35.282 [2024-11-26 19:29:09.140412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.282 [2024-11-26 19:29:09.140422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.282 [2024-11-26 19:29:09.140430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.282 [2024-11-26 19:29:09.140435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.282 [2024-11-26 19:29:09.140441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.282 [2024-11-26 19:29:09.140446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.282 [2024-11-26 19:29:09.140452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.282 [2024-11-26 19:29:09.140457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.282 [2024-11-26 19:29:09.140463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:35.282 [2024-11-26 19:29:09.140468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.282 [2024-11-26 19:29:09.140473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c0a0 is same with the state(6) to be set 00:22:35.541 [2024-11-26 19:29:09.150397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x212c0a0 (9): Bad file descriptor 00:22:35.541 [2024-11-26 19:29:09.160429] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:35.541 [2024-11-26 19:29:09.160440] bdev_nvme.c:2342:bdev_nvme_reset_destroy_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting qpair 0x214f690:1. 00:22:35.541 [2024-11-26 19:29:09.160461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.541 [2024-11-26 19:29:09.160468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.541 [2024-11-26 19:29:09.160478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:64 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.541 [2024-11-26 19:29:09.160483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.541 [2024-11-26 19:29:09.160490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.541 [2024-11-26 19:29:09.160495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:35.541 [2024-11-26 19:29:09.160543] bdev_nvme.c:1776:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x214f690 was disconnected and freed in a reset ctrlr sequence. 00:22:35.541 [2024-11-26 19:29:09.160550] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:35.541 [2024-11-26 19:29:09.160554] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:22:35.541 [2024-11-26 19:29:09.160561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:35.541 [2024-11-26 19:29:09.160574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.479 [2024-11-26 19:29:10.223159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:36.479 [2024-11-26 19:29:10.223253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x212c0a0 with addr=10.0.0.2, port=4420 00:22:36.479 [2024-11-26 19:29:10.223286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x212c0a0 is same with the state(6) to be set 00:22:36.479 [2024-11-26 19:29:10.223356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212c0a0 (9): Bad file descriptor 00:22:36.479 [2024-11-26 19:29:10.224202] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:22:36.479 [2024-11-26 19:29:10.224236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:36.479 [2024-11-26 19:29:10.224243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:36.479 [2024-11-26 19:29:10.224251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:36.479 [2024-11-26 19:29:10.224257] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:36.479 [2024-11-26 19:29:10.224262] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:36.479 [2024-11-26 19:29:10.224266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:36.479 [2024-11-26 19:29:10.224277] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:36.479 [2024-11-26 19:29:10.224282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.479 19:29:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.418 [2024-11-26 19:29:11.226598] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev nvme0n1: Input/output error 00:22:37.418 [2024-11-26 19:29:11.226625] vbdev_gpt.c: 467:gpt_bdev_complete: *ERROR*: Gpt: bdev=nvme0n1 io error 00:22:37.418 [2024-11-26 19:29:11.226675] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:37.418 [2024-11-26 19:29:11.226703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:37.418 [2024-11-26 19:29:11.226709] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:37.418 [2024-11-26 19:29:11.226714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:37.418 [2024-11-26 19:29:11.226720] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:22:37.418 [2024-11-26 19:29:11.226725] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:37.418 [2024-11-26 19:29:11.226728] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:37.418 [2024-11-26 19:29:11.226732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:37.418 [2024-11-26 19:29:11.226745] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:37.418 [2024-11-26 19:29:11.226763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.418 [2024-11-26 19:29:11.226771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.418 [2024-11-26 19:29:11.226779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.418 [2024-11-26 19:29:11.226784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.418 [2024-11-26 19:29:11.226790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.418 [2024-11-26 19:29:11.226795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.418 [2024-11-26 19:29:11.226800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.418 [2024-11-26 19:29:11.226806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.418 [2024-11-26 19:29:11.226812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:37.418 [2024-11-26 19:29:11.226817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.418 [2024-11-26 19:29:11.226822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:22:37.418 [2024-11-26 19:29:11.227023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x211b390 (9): Bad file descriptor 00:22:37.418 [2024-11-26 19:29:11.227842] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:37.418 [2024-11-26 19:29:11.227854] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:37.418 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:37.679 19:29:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:37.679 19:29:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.618 19:29:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:38.618 19:29:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.557 [2024-11-26 19:29:13.283253] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.557 [2024-11-26 19:29:13.283266] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.557 [2024-11-26 19:29:13.283279] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:39.557 [2024-11-26 19:29:13.370526] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:39.818 19:29:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.818 [2024-11-26 19:29:13.594674] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:22:39.818 [2024-11-26 19:29:13.595494] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x21386b0:1 started. 00:22:39.818 [2024-11-26 19:29:13.596390] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:39.818 [2024-11-26 19:29:13.596418] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:39.818 [2024-11-26 19:29:13.596431] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:39.818 [2024-11-26 19:29:13.596442] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:39.818 [2024-11-26 19:29:13.596447] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:39.818 [2024-11-26 19:29:13.600485] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x21386b0 was disconnected and freed. delete nvme_qpair. 
00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3852361 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3852361 ']' 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3852361 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852361 
00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852361' 00:22:40.758 killing process with pid 3852361 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3852361 00:22:40.758 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3852361 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.018 rmmod nvme_tcp 00:22:41.018 rmmod nvme_fabrics 00:22:41.018 rmmod nvme_keyring 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3852328 ']' 00:22:41.018 
19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3852328 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3852328 ']' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3852328 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3852328 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3852328' 00:22:41.018 killing process with pid 3852328 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3852328 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3852328 00:22:41.018 [2024-11-26 19:29:14.739430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a75de0 is same with the state(6) to be set 00:22:41.018 [2024-11-26 19:29:14.739460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a75de0 is same with the state(6) to be set 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.018 19:29:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.018 19:29:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.555 00:22:43.555 real 0m21.897s 00:22:43.555 user 0m27.863s 00:22:43.555 sys 0m5.453s 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.555 ************************************ 00:22:43.555 END TEST nvmf_discovery_remove_ifc 00:22:43.555 ************************************ 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:43.555 19:29:16 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:43.555 ************************************ 00:22:43.555 START TEST nvmf_identify_kernel_target 00:22:43.555 ************************************ 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:43.555 * Looking for test storage... 00:22:43.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:22:43.555 19:29:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:22:43.555 19:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- 
# echo 2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.555 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.555 --rc genhtml_branch_coverage=1 00:22:43.555 --rc genhtml_function_coverage=1 00:22:43.556 --rc genhtml_legend=1 00:22:43.556 --rc geninfo_all_blocks=1 00:22:43.556 --rc geninfo_unexecuted_blocks=1 00:22:43.556 00:22:43.556 ' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.556 --rc genhtml_branch_coverage=1 00:22:43.556 --rc genhtml_function_coverage=1 00:22:43.556 --rc genhtml_legend=1 00:22:43.556 --rc geninfo_all_blocks=1 00:22:43.556 --rc geninfo_unexecuted_blocks=1 00:22:43.556 00:22:43.556 ' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.556 --rc genhtml_branch_coverage=1 00:22:43.556 --rc genhtml_function_coverage=1 00:22:43.556 --rc genhtml_legend=1 00:22:43.556 --rc geninfo_all_blocks=1 00:22:43.556 --rc geninfo_unexecuted_blocks=1 00:22:43.556 00:22:43.556 ' 00:22:43.556 19:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.556 --rc genhtml_branch_coverage=1 00:22:43.556 --rc genhtml_function_coverage=1 00:22:43.556 --rc genhtml_legend=1 00:22:43.556 --rc geninfo_all_blocks=1 00:22:43.556 --rc geninfo_unexecuted_blocks=1 00:22:43.556 00:22:43.556 ' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.556 19:29:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:43.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:43.556 19:29:17 
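The log above records a real script error, `[: : integer expression expected`, at nvmf/common.sh line 33: an empty string reaches an integer comparison as `'[' '' -eq 1 ']'`. A minimal reproduction and a defensive rewrite; `flag` is an illustrative name, not the actual variable in nvmf/common.sh:

```shell
# An empty operand is not a valid integer, so this is the failing shape
# from the log (stderr suppressed here; the echo does not run).
flag=""
[ "$flag" -eq 1 ] 2>/dev/null && echo "enabled"

# Defensive form: default the empty value to 0 so the test stays numeric.
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

The run continues anyway because the failing test only gates an optional branch, but the pattern is worth guarding wherever a flag may be unset.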
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.556 19:29:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 
0x159b)' 00:22:48.837 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:48.837 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:48.837 Found net devices under 0000:31:00.0: cvl_0_0 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.837 
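The discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` and then strips the directory prefix with `"${pci_net_devs[@]##*/}"` to keep only interface names such as `cvl_0_0`. The same pattern against a scratch directory, so the mechanics are visible without real PCI devices:

```shell
# Mimic the sysfs layout the loop walks: <pci address>/net/<iface name>.
scratch=$(mktemp -d)
mkdir -p "$scratch/0000:31:00.0/net/cvl_0_0"

pci_net_devs=("$scratch/0000:31:00.0/net/"*)   # full sysfs-style paths
pci_net_devs=("${pci_net_devs[@]##*/}")        # ##*/ = basename of each entry
echo "${pci_net_devs[0]}"                      # cvl_0_0

rm -rf "$scratch"
```

`##*/` removes the longest prefix ending in `/` from every array element in one expansion, which is why the log can go from device paths to bare interface names in a single assignment.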
19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:48.837 Found net devices under 0000:31:00.1: cvl_0_1 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.837 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:48.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:22:48.838 00:22:48.838 --- 10.0.0.2 ping statistics --- 00:22:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.838 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:48.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:22:48.838 00:22:48.838 --- 10.0.0.1 ping statistics --- 00:22:48.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.838 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # 
trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:22:48.838 19:29:22 
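The `get_main_ns_ip` steps above map the transport to the *name* of the variable holding the address, then dereference that name with `${!var}`. A condensed sketch of the indirection, using the values visible in the log:

```shell
# Variables as set earlier in the log for the tcp test flavor.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

# Map transport -> variable *name*, not value.
declare -A ip_candidates
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
ip_candidates["tcp"]=NVMF_INITIATOR_IP

transport=tcp
var=${ip_candidates[$transport]}   # -> NVMF_INITIATOR_IP
echo "${!var}"                     # indirect expansion -> 10.0.0.1
```

That is why the log shows `ip=NVMF_INITIATOR_IP` first and `echo 10.0.0.1` only at the end: the function carries the variable name around and resolves it once.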
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:48.838 19:29:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:51.380 Waiting for block devices as requested 00:22:51.380 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:51.380 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:22:51.639 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:22:51.639 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:22:51.899 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:22:51.899 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:22:51.899 0000:00:01.2 (8086 0b00): vfio-pci 
-> ioatdma 00:22:51.899 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:22:52.158 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:22:52.158 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:52.419 No valid GPT data, bailing 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 
00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:52.419 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:22:52.681 00:22:52.681 Discovery Log Number of Records 2, Generation counter 2 00:22:52.681 =====Discovery Log Entry 0====== 00:22:52.681 trtype: tcp 00:22:52.681 
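Bash xtrace does not print redirection targets, so the bare `echo` lines above hide which configfs attribute each value lands in. A hedged reconstruction of the kernel-target setup, requiring root and the nvmet module; the attribute names come from the standard Linux nvmet configfs layout, not from the log itself:

```shell
# Hedged reconstruction: attribute paths are from the usual nvmet
# configfs interface, since xtrace omits the '> file' redirections.
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$sub"                 # configfs creates the subsystem skeleton
mkdir "$sub/namespaces/1"    # one namespace under it
mkdir "$port"                # one listening port

echo "SPDK-$nqn"  > "$sub/attr_model"            # model string seen in identify
echo 1            > "$sub/attr_allow_any_host"   # skip host allow-list
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # backing block device
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"          # listen address
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

ln -s "$sub" "$port/subsystems/"  # expose the subsystem on the port
```

The final `ln -s` is what makes the subsystem discoverable, which matches the log: the `nvme discover` that follows immediately returns both the discovery subsystem and nqn.2016-06.io.spdk:testnqn.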
adrfam: ipv4 00:22:52.681 subtype: current discovery subsystem 00:22:52.681 treq: not specified, sq flow control disable supported 00:22:52.681 portid: 1 00:22:52.681 trsvcid: 4420 00:22:52.681 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:52.681 traddr: 10.0.0.1 00:22:52.681 eflags: none 00:22:52.681 sectype: none 00:22:52.681 =====Discovery Log Entry 1====== 00:22:52.681 trtype: tcp 00:22:52.681 adrfam: ipv4 00:22:52.681 subtype: nvme subsystem 00:22:52.681 treq: not specified, sq flow control disable supported 00:22:52.681 portid: 1 00:22:52.681 trsvcid: 4420 00:22:52.681 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:52.681 traddr: 10.0.0.1 00:22:52.681 eflags: none 00:22:52.681 sectype: none 00:22:52.681 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:52.681 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:52.681 ===================================================== 00:22:52.681 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:52.681 ===================================================== 00:22:52.681 Controller Capabilities/Features 00:22:52.681 ================================ 00:22:52.681 Vendor ID: 0000 00:22:52.681 Subsystem Vendor ID: 0000 00:22:52.681 Serial Number: 7e66984f42cc3e9f3874 00:22:52.681 Model Number: Linux 00:22:52.681 Firmware Version: 6.8.9-20 00:22:52.681 Recommended Arb Burst: 0 00:22:52.681 IEEE OUI Identifier: 00 00 00 00:22:52.681 Multi-path I/O 00:22:52.681 May have multiple subsystem ports: No 00:22:52.681 May have multiple controllers: No 00:22:52.681 Associated with SR-IOV VF: No 00:22:52.681 Max Data Transfer Size: Unlimited 00:22:52.681 Max Number of Namespaces: 0 00:22:52.681 Max Number of I/O Queues: 1024 00:22:52.681 NVMe Specification Version (VS): 1.3 00:22:52.681 NVMe Specification Version 
(Identify): 1.3 00:22:52.681 Maximum Queue Entries: 1024 00:22:52.681 Contiguous Queues Required: No 00:22:52.681 Arbitration Mechanisms Supported 00:22:52.681 Weighted Round Robin: Not Supported 00:22:52.681 Vendor Specific: Not Supported 00:22:52.681 Reset Timeout: 7500 ms 00:22:52.681 Doorbell Stride: 4 bytes 00:22:52.681 NVM Subsystem Reset: Not Supported 00:22:52.681 Command Sets Supported 00:22:52.681 NVM Command Set: Supported 00:22:52.681 Boot Partition: Not Supported 00:22:52.681 Memory Page Size Minimum: 4096 bytes 00:22:52.681 Memory Page Size Maximum: 4096 bytes 00:22:52.681 Persistent Memory Region: Not Supported 00:22:52.681 Optional Asynchronous Events Supported 00:22:52.681 Namespace Attribute Notices: Not Supported 00:22:52.681 Firmware Activation Notices: Not Supported 00:22:52.681 ANA Change Notices: Not Supported 00:22:52.681 PLE Aggregate Log Change Notices: Not Supported 00:22:52.681 LBA Status Info Alert Notices: Not Supported 00:22:52.681 EGE Aggregate Log Change Notices: Not Supported 00:22:52.681 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.681 Zone Descriptor Change Notices: Not Supported 00:22:52.681 Discovery Log Change Notices: Supported 00:22:52.681 Controller Attributes 00:22:52.681 128-bit Host Identifier: Not Supported 00:22:52.681 Non-Operational Permissive Mode: Not Supported 00:22:52.681 NVM Sets: Not Supported 00:22:52.681 Read Recovery Levels: Not Supported 00:22:52.681 Endurance Groups: Not Supported 00:22:52.681 Predictable Latency Mode: Not Supported 00:22:52.681 Traffic Based Keep ALive: Not Supported 00:22:52.681 Namespace Granularity: Not Supported 00:22:52.681 SQ Associations: Not Supported 00:22:52.681 UUID List: Not Supported 00:22:52.681 Multi-Domain Subsystem: Not Supported 00:22:52.681 Fixed Capacity Management: Not Supported 00:22:52.681 Variable Capacity Management: Not Supported 00:22:52.681 Delete Endurance Group: Not Supported 00:22:52.681 Delete NVM Set: Not Supported 00:22:52.681 Extended LBA 
Formats Supported: Not Supported 00:22:52.681 Flexible Data Placement Supported: Not Supported 00:22:52.681 00:22:52.681 Controller Memory Buffer Support 00:22:52.681 ================================ 00:22:52.682 Supported: No 00:22:52.682 00:22:52.682 Persistent Memory Region Support 00:22:52.682 ================================ 00:22:52.682 Supported: No 00:22:52.682 00:22:52.682 Admin Command Set Attributes 00:22:52.682 ============================ 00:22:52.682 Security Send/Receive: Not Supported 00:22:52.682 Format NVM: Not Supported 00:22:52.682 Firmware Activate/Download: Not Supported 00:22:52.682 Namespace Management: Not Supported 00:22:52.682 Device Self-Test: Not Supported 00:22:52.682 Directives: Not Supported 00:22:52.682 NVMe-MI: Not Supported 00:22:52.682 Virtualization Management: Not Supported 00:22:52.682 Doorbell Buffer Config: Not Supported 00:22:52.682 Get LBA Status Capability: Not Supported 00:22:52.682 Command & Feature Lockdown Capability: Not Supported 00:22:52.682 Abort Command Limit: 1 00:22:52.682 Async Event Request Limit: 1 00:22:52.682 Number of Firmware Slots: N/A 00:22:52.682 Firmware Slot 1 Read-Only: N/A 00:22:52.682 Firmware Activation Without Reset: N/A 00:22:52.682 Multiple Update Detection Support: N/A 00:22:52.682 Firmware Update Granularity: No Information Provided 00:22:52.682 Per-Namespace SMART Log: No 00:22:52.682 Asymmetric Namespace Access Log Page: Not Supported 00:22:52.682 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:52.682 Command Effects Log Page: Not Supported 00:22:52.682 Get Log Page Extended Data: Supported 00:22:52.682 Telemetry Log Pages: Not Supported 00:22:52.682 Persistent Event Log Pages: Not Supported 00:22:52.682 Supported Log Pages Log Page: May Support 00:22:52.682 Commands Supported & Effects Log Page: Not Supported 00:22:52.682 Feature Identifiers & Effects Log Page:May Support 00:22:52.682 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.682 Data Area 4 for Telemetry Log: 
Not Supported 00:22:52.682 Error Log Page Entries Supported: 1 00:22:52.682 Keep Alive: Not Supported 00:22:52.682 00:22:52.682 NVM Command Set Attributes 00:22:52.682 ========================== 00:22:52.682 Submission Queue Entry Size 00:22:52.682 Max: 1 00:22:52.682 Min: 1 00:22:52.682 Completion Queue Entry Size 00:22:52.682 Max: 1 00:22:52.682 Min: 1 00:22:52.682 Number of Namespaces: 0 00:22:52.682 Compare Command: Not Supported 00:22:52.682 Write Uncorrectable Command: Not Supported 00:22:52.682 Dataset Management Command: Not Supported 00:22:52.682 Write Zeroes Command: Not Supported 00:22:52.682 Set Features Save Field: Not Supported 00:22:52.682 Reservations: Not Supported 00:22:52.682 Timestamp: Not Supported 00:22:52.682 Copy: Not Supported 00:22:52.682 Volatile Write Cache: Not Present 00:22:52.682 Atomic Write Unit (Normal): 1 00:22:52.682 Atomic Write Unit (PFail): 1 00:22:52.682 Atomic Compare & Write Unit: 1 00:22:52.682 Fused Compare & Write: Not Supported 00:22:52.682 Scatter-Gather List 00:22:52.682 SGL Command Set: Supported 00:22:52.682 SGL Keyed: Not Supported 00:22:52.682 SGL Bit Bucket Descriptor: Not Supported 00:22:52.682 SGL Metadata Pointer: Not Supported 00:22:52.682 Oversized SGL: Not Supported 00:22:52.682 SGL Metadata Address: Not Supported 00:22:52.682 SGL Offset: Supported 00:22:52.682 Transport SGL Data Block: Not Supported 00:22:52.682 Replay Protected Memory Block: Not Supported 00:22:52.682 00:22:52.682 Firmware Slot Information 00:22:52.682 ========================= 00:22:52.682 Active slot: 0 00:22:52.682 00:22:52.682 00:22:52.682 Error Log 00:22:52.682 ========= 00:22:52.682 00:22:52.682 Active Namespaces 00:22:52.682 ================= 00:22:52.682 Discovery Log Page 00:22:52.682 ================== 00:22:52.682 Generation Counter: 2 00:22:52.682 Number of Records: 2 00:22:52.682 Record Format: 0 00:22:52.682 00:22:52.682 Discovery Log Entry 0 00:22:52.682 ---------------------- 00:22:52.682 Transport Type: 3 (TCP) 
00:22:52.682 Address Family: 1 (IPv4) 00:22:52.682 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:52.682 Entry Flags: 00:22:52.682 Duplicate Returned Information: 0 00:22:52.682 Explicit Persistent Connection Support for Discovery: 0 00:22:52.682 Transport Requirements: 00:22:52.682 Secure Channel: Not Specified 00:22:52.682 Port ID: 1 (0x0001) 00:22:52.682 Controller ID: 65535 (0xffff) 00:22:52.682 Admin Max SQ Size: 32 00:22:52.682 Transport Service Identifier: 4420 00:22:52.682 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:52.682 Transport Address: 10.0.0.1 00:22:52.682 Discovery Log Entry 1 00:22:52.682 ---------------------- 00:22:52.682 Transport Type: 3 (TCP) 00:22:52.682 Address Family: 1 (IPv4) 00:22:52.682 Subsystem Type: 2 (NVM Subsystem) 00:22:52.682 Entry Flags: 00:22:52.682 Duplicate Returned Information: 0 00:22:52.682 Explicit Persistent Connection Support for Discovery: 0 00:22:52.682 Transport Requirements: 00:22:52.682 Secure Channel: Not Specified 00:22:52.682 Port ID: 1 (0x0001) 00:22:52.682 Controller ID: 65535 (0xffff) 00:22:52.682 Admin Max SQ Size: 32 00:22:52.682 Transport Service Identifier: 4420 00:22:52.682 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:52.682 Transport Address: 10.0.0.1 00:22:52.682 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:52.682 get_feature(0x01) failed 00:22:52.682 get_feature(0x02) failed 00:22:52.682 get_feature(0x04) failed 00:22:52.682 ===================================================== 00:22:52.682 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:52.682 ===================================================== 00:22:52.682 Controller Capabilities/Features 00:22:52.682 ================================ 
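The `spdk_nvme_identify -r '...'` invocation above passes the target as a single transport-ID string. As a sketch (the `build_trid` helper is hypothetical, not part of SPDK), the string's fields mirror the discovery log entry: transport type, address family, address, service identifier, and subsystem NQN:

```shell
# Hypothetical helper: compose the -r transport ID string used by
# spdk_nvme_identify from its individual fields, matching the log above.
build_trid() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    printf 'trtype:%s adrfam:%s traddr:%s trsvcid:%s subnqn:%s' \
        "$trtype" "$adrfam" "$traddr" "$trsvcid" "$subnqn"
}

TRID=$(build_trid tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn)
echo "$TRID"
# would then be passed as: spdk_nvme_identify -r "$TRID"
```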
00:22:52.682 Vendor ID: 0000 00:22:52.682 Subsystem Vendor ID: 0000 00:22:52.682 Serial Number: 705ba3b0f72e038a02cd 00:22:52.682 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:52.682 Firmware Version: 6.8.9-20 00:22:52.682 Recommended Arb Burst: 6 00:22:52.682 IEEE OUI Identifier: 00 00 00 00:22:52.682 Multi-path I/O 00:22:52.682 May have multiple subsystem ports: Yes 00:22:52.682 May have multiple controllers: Yes 00:22:52.682 Associated with SR-IOV VF: No 00:22:52.682 Max Data Transfer Size: Unlimited 00:22:52.682 Max Number of Namespaces: 1024 00:22:52.682 Max Number of I/O Queues: 128 00:22:52.682 NVMe Specification Version (VS): 1.3 00:22:52.682 NVMe Specification Version (Identify): 1.3 00:22:52.682 Maximum Queue Entries: 1024 00:22:52.682 Contiguous Queues Required: No 00:22:52.683 Arbitration Mechanisms Supported 00:22:52.683 Weighted Round Robin: Not Supported 00:22:52.683 Vendor Specific: Not Supported 00:22:52.683 Reset Timeout: 7500 ms 00:22:52.683 Doorbell Stride: 4 bytes 00:22:52.683 NVM Subsystem Reset: Not Supported 00:22:52.683 Command Sets Supported 00:22:52.683 NVM Command Set: Supported 00:22:52.683 Boot Partition: Not Supported 00:22:52.683 Memory Page Size Minimum: 4096 bytes 00:22:52.683 Memory Page Size Maximum: 4096 bytes 00:22:52.683 Persistent Memory Region: Not Supported 00:22:52.683 Optional Asynchronous Events Supported 00:22:52.683 Namespace Attribute Notices: Supported 00:22:52.683 Firmware Activation Notices: Not Supported 00:22:52.683 ANA Change Notices: Supported 00:22:52.683 PLE Aggregate Log Change Notices: Not Supported 00:22:52.683 LBA Status Info Alert Notices: Not Supported 00:22:52.683 EGE Aggregate Log Change Notices: Not Supported 00:22:52.683 Normal NVM Subsystem Shutdown event: Not Supported 00:22:52.683 Zone Descriptor Change Notices: Not Supported 00:22:52.683 Discovery Log Change Notices: Not Supported 00:22:52.683 Controller Attributes 00:22:52.683 128-bit Host Identifier: Supported 00:22:52.683 
Non-Operational Permissive Mode: Not Supported 00:22:52.683 NVM Sets: Not Supported 00:22:52.683 Read Recovery Levels: Not Supported 00:22:52.683 Endurance Groups: Not Supported 00:22:52.683 Predictable Latency Mode: Not Supported 00:22:52.683 Traffic Based Keep ALive: Supported 00:22:52.683 Namespace Granularity: Not Supported 00:22:52.683 SQ Associations: Not Supported 00:22:52.683 UUID List: Not Supported 00:22:52.683 Multi-Domain Subsystem: Not Supported 00:22:52.683 Fixed Capacity Management: Not Supported 00:22:52.683 Variable Capacity Management: Not Supported 00:22:52.683 Delete Endurance Group: Not Supported 00:22:52.683 Delete NVM Set: Not Supported 00:22:52.683 Extended LBA Formats Supported: Not Supported 00:22:52.683 Flexible Data Placement Supported: Not Supported 00:22:52.683 00:22:52.683 Controller Memory Buffer Support 00:22:52.683 ================================ 00:22:52.683 Supported: No 00:22:52.683 00:22:52.683 Persistent Memory Region Support 00:22:52.683 ================================ 00:22:52.683 Supported: No 00:22:52.683 00:22:52.683 Admin Command Set Attributes 00:22:52.683 ============================ 00:22:52.683 Security Send/Receive: Not Supported 00:22:52.683 Format NVM: Not Supported 00:22:52.683 Firmware Activate/Download: Not Supported 00:22:52.683 Namespace Management: Not Supported 00:22:52.683 Device Self-Test: Not Supported 00:22:52.683 Directives: Not Supported 00:22:52.683 NVMe-MI: Not Supported 00:22:52.683 Virtualization Management: Not Supported 00:22:52.683 Doorbell Buffer Config: Not Supported 00:22:52.683 Get LBA Status Capability: Not Supported 00:22:52.683 Command & Feature Lockdown Capability: Not Supported 00:22:52.683 Abort Command Limit: 4 00:22:52.683 Async Event Request Limit: 4 00:22:52.683 Number of Firmware Slots: N/A 00:22:52.683 Firmware Slot 1 Read-Only: N/A 00:22:52.683 Firmware Activation Without Reset: N/A 00:22:52.683 Multiple Update Detection Support: N/A 00:22:52.683 Firmware Update Granularity: 
No Information Provided 00:22:52.683 Per-Namespace SMART Log: Yes 00:22:52.683 Asymmetric Namespace Access Log Page: Supported 00:22:52.683 ANA Transition Time : 10 sec 00:22:52.683 00:22:52.683 Asymmetric Namespace Access Capabilities 00:22:52.683 ANA Optimized State : Supported 00:22:52.683 ANA Non-Optimized State : Supported 00:22:52.683 ANA Inaccessible State : Supported 00:22:52.683 ANA Persistent Loss State : Supported 00:22:52.683 ANA Change State : Supported 00:22:52.683 ANAGRPID is not changed : No 00:22:52.683 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:52.683 00:22:52.683 ANA Group Identifier Maximum : 128 00:22:52.683 Number of ANA Group Identifiers : 128 00:22:52.683 Max Number of Allowed Namespaces : 1024 00:22:52.683 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:52.683 Command Effects Log Page: Supported 00:22:52.683 Get Log Page Extended Data: Supported 00:22:52.683 Telemetry Log Pages: Not Supported 00:22:52.683 Persistent Event Log Pages: Not Supported 00:22:52.683 Supported Log Pages Log Page: May Support 00:22:52.683 Commands Supported & Effects Log Page: Not Supported 00:22:52.683 Feature Identifiers & Effects Log Page:May Support 00:22:52.683 NVMe-MI Commands & Effects Log Page: May Support 00:22:52.683 Data Area 4 for Telemetry Log: Not Supported 00:22:52.683 Error Log Page Entries Supported: 128 00:22:52.683 Keep Alive: Supported 00:22:52.683 Keep Alive Granularity: 1000 ms 00:22:52.683 00:22:52.683 NVM Command Set Attributes 00:22:52.683 ========================== 00:22:52.683 Submission Queue Entry Size 00:22:52.683 Max: 64 00:22:52.683 Min: 64 00:22:52.683 Completion Queue Entry Size 00:22:52.683 Max: 16 00:22:52.683 Min: 16 00:22:52.683 Number of Namespaces: 1024 00:22:52.683 Compare Command: Not Supported 00:22:52.683 Write Uncorrectable Command: Not Supported 00:22:52.683 Dataset Management Command: Supported 00:22:52.683 Write Zeroes Command: Supported 00:22:52.683 Set Features Save Field: Not Supported 00:22:52.683 
Reservations: Not Supported 00:22:52.683 Timestamp: Not Supported 00:22:52.683 Copy: Not Supported 00:22:52.683 Volatile Write Cache: Present 00:22:52.683 Atomic Write Unit (Normal): 1 00:22:52.683 Atomic Write Unit (PFail): 1 00:22:52.683 Atomic Compare & Write Unit: 1 00:22:52.683 Fused Compare & Write: Not Supported 00:22:52.683 Scatter-Gather List 00:22:52.683 SGL Command Set: Supported 00:22:52.683 SGL Keyed: Not Supported 00:22:52.683 SGL Bit Bucket Descriptor: Not Supported 00:22:52.683 SGL Metadata Pointer: Not Supported 00:22:52.683 Oversized SGL: Not Supported 00:22:52.683 SGL Metadata Address: Not Supported 00:22:52.683 SGL Offset: Supported 00:22:52.683 Transport SGL Data Block: Not Supported 00:22:52.683 Replay Protected Memory Block: Not Supported 00:22:52.683 00:22:52.683 Firmware Slot Information 00:22:52.683 ========================= 00:22:52.683 Active slot: 0 00:22:52.683 00:22:52.683 Asymmetric Namespace Access 00:22:52.683 =========================== 00:22:52.683 Change Count : 0 00:22:52.683 Number of ANA Group Descriptors : 1 00:22:52.683 ANA Group Descriptor : 0 00:22:52.683 ANA Group ID : 1 00:22:52.683 Number of NSID Values : 1 00:22:52.684 Change Count : 0 00:22:52.684 ANA State : 1 00:22:52.684 Namespace Identifier : 1 00:22:52.684 00:22:52.684 Commands Supported and Effects 00:22:52.684 ============================== 00:22:52.684 Admin Commands 00:22:52.684 -------------- 00:22:52.684 Get Log Page (02h): Supported 00:22:52.684 Identify (06h): Supported 00:22:52.684 Abort (08h): Supported 00:22:52.684 Set Features (09h): Supported 00:22:52.684 Get Features (0Ah): Supported 00:22:52.684 Asynchronous Event Request (0Ch): Supported 00:22:52.684 Keep Alive (18h): Supported 00:22:52.684 I/O Commands 00:22:52.684 ------------ 00:22:52.684 Flush (00h): Supported 00:22:52.684 Write (01h): Supported LBA-Change 00:22:52.684 Read (02h): Supported 00:22:52.684 Write Zeroes (08h): Supported LBA-Change 00:22:52.684 Dataset Management (09h): Supported 
00:22:52.684 00:22:52.684 Error Log 00:22:52.684 ========= 00:22:52.684 Entry: 0 00:22:52.684 Error Count: 0x3 00:22:52.684 Submission Queue Id: 0x0 00:22:52.684 Command Id: 0x5 00:22:52.684 Phase Bit: 0 00:22:52.684 Status Code: 0x2 00:22:52.684 Status Code Type: 0x0 00:22:52.684 Do Not Retry: 1 00:22:52.684 Error Location: 0x28 00:22:52.684 LBA: 0x0 00:22:52.684 Namespace: 0x0 00:22:52.684 Vendor Log Page: 0x0 00:22:52.684 ----------- 00:22:52.684 Entry: 1 00:22:52.684 Error Count: 0x2 00:22:52.684 Submission Queue Id: 0x0 00:22:52.684 Command Id: 0x5 00:22:52.684 Phase Bit: 0 00:22:52.684 Status Code: 0x2 00:22:52.684 Status Code Type: 0x0 00:22:52.684 Do Not Retry: 1 00:22:52.684 Error Location: 0x28 00:22:52.684 LBA: 0x0 00:22:52.684 Namespace: 0x0 00:22:52.684 Vendor Log Page: 0x0 00:22:52.684 ----------- 00:22:52.684 Entry: 2 00:22:52.684 Error Count: 0x1 00:22:52.684 Submission Queue Id: 0x0 00:22:52.684 Command Id: 0x4 00:22:52.684 Phase Bit: 0 00:22:52.684 Status Code: 0x2 00:22:52.684 Status Code Type: 0x0 00:22:52.684 Do Not Retry: 1 00:22:52.684 Error Location: 0x28 00:22:52.684 LBA: 0x0 00:22:52.684 Namespace: 0x0 00:22:52.684 Vendor Log Page: 0x0 00:22:52.684 00:22:52.684 Number of Queues 00:22:52.684 ================ 00:22:52.684 Number of I/O Submission Queues: 128 00:22:52.684 Number of I/O Completion Queues: 128 00:22:52.684 00:22:52.684 ZNS Specific Controller Data 00:22:52.684 ============================ 00:22:52.684 Zone Append Size Limit: 0 00:22:52.684 00:22:52.684 00:22:52.684 Active Namespaces 00:22:52.684 ================= 00:22:52.684 get_feature(0x05) failed 00:22:52.684 Namespace ID:1 00:22:52.684 Command Set Identifier: NVM (00h) 00:22:52.684 Deallocate: Supported 00:22:52.684 Deallocated/Unwritten Error: Not Supported 00:22:52.684 Deallocated Read Value: Unknown 00:22:52.684 Deallocate in Write Zeroes: Not Supported 00:22:52.684 Deallocated Guard Field: 0xFFFF 00:22:52.684 Flush: Supported 00:22:52.684 Reservation: Not Supported 
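The error-log entries above all report `Status Code Type: 0x0` with `Status Code: 0x2`, which in the NVMe base specification is a Generic Command Status of "Invalid Field in Command" (consistent with the earlier `get_feature(...) failed` lines). A small decoding helper (hypothetical, mapping only the values seen in this log) might look like:

```shell
# Hypothetical helper: decode the Status Code Type / Status Code pairs from
# the error log entries above. Only values seen in this log are mapped; the
# full tables are in the NVMe base specification.
decode_status() {
    local sct=$1 sc=$2
    case "$sct:$sc" in
        0x0:0x0) echo "Generic Command Status: Successful Completion" ;;
        0x0:0x2) echo "Generic Command Status: Invalid Field in Command" ;;
        *)       echo "SCT=$sct SC=$sc (not mapped here)" ;;
    esac
}

decode_status 0x0 0x2   # the code reported by all three entries above
```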
00:22:52.684 Namespace Sharing Capabilities: Multiple Controllers 00:22:52.684 Size (in LBAs): 3750748848 (1788GiB) 00:22:52.684 Capacity (in LBAs): 3750748848 (1788GiB) 00:22:52.684 Utilization (in LBAs): 3750748848 (1788GiB) 00:22:52.684 UUID: 38083d1d-7395-4dbe-8d2a-d1933f89f504 00:22:52.684 Thin Provisioning: Not Supported 00:22:52.684 Per-NS Atomic Units: Yes 00:22:52.684 Atomic Write Unit (Normal): 8 00:22:52.684 Atomic Write Unit (PFail): 8 00:22:52.684 Preferred Write Granularity: 8 00:22:52.684 Atomic Compare & Write Unit: 8 00:22:52.684 Atomic Boundary Size (Normal): 0 00:22:52.684 Atomic Boundary Size (PFail): 0 00:22:52.684 Atomic Boundary Offset: 0 00:22:52.684 NGUID/EUI64 Never Reused: No 00:22:52.684 ANA group ID: 1 00:22:52.684 Namespace Write Protected: No 00:22:52.684 Number of LBA Formats: 1 00:22:52.684 Current LBA Format: LBA Format #00 00:22:52.684 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:52.684 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.684 rmmod nvme_tcp 00:22:52.684 rmmod nvme_fabrics 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 
-- # set -e 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.684 19:29:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:22:55.223 19:29:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:22:57.127 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:22:57.127 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 
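The `clean_kernel_target` steps traced above (the real helper lives in SPDK's `nvmf/common.sh`) dismantle the kernel nvmet configfs tree bottom-up before unloading the modules; the `echo 0` presumably disables the namespace via its `enable` attribute, whose path xtrace does not show. Since the real commands need root and a live target, this sketch only prints the plan:

```shell
# Hedged reconstruction of the teardown order seen in the xtrace above.
# This helper (hypothetical) prints the commands rather than running them.
nvmet_teardown_plan() {
    local nqn=$1 cfg=/sys/kernel/config/nvmet
    printf '%s\n' \
        "echo 0 > $cfg/subsystems/$nqn/namespaces/1/enable" \
        "rm -f $cfg/ports/1/subsystems/$nqn" \
        "rmdir $cfg/subsystems/$nqn/namespaces/1" \
        "rmdir $cfg/ports/1" \
        "rmdir $cfg/subsystems/$nqn" \
        "modprobe -r nvmet_tcp nvmet"
}

nvmet_teardown_plan nqn.2016-06.io.spdk:testnqn
```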
0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:22:57.387 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:22:59.294 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:22:59.294 00:22:59.294 real 0m16.212s 00:22:59.294 user 0m3.461s 00:22:59.294 sys 0m7.999s 00:22:59.294 19:29:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.294 19:29:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.294 ************************************ 00:22:59.294 END TEST nvmf_identify_kernel_target 00:22:59.294 ************************************ 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.555 ************************************ 00:22:59.555 START TEST nvmf_auth_host 00:22:59.555 ************************************ 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:59.555 * Looking for test storage... 
00:22:59.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:59.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.555 --rc genhtml_branch_coverage=1 00:22:59.555 --rc genhtml_function_coverage=1 00:22:59.555 --rc genhtml_legend=1 00:22:59.555 --rc geninfo_all_blocks=1 00:22:59.555 --rc geninfo_unexecuted_blocks=1 00:22:59.555 00:22:59.555 ' 00:22:59.555 19:29:33 
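The lcov gate traced above (`lt 1.15 2` via `cmp_versions`) splits each version string into components and compares them numerically, with missing fields treated as 0. A simplified standalone equivalent for plain dotted versions (an illustrative sketch, not the actual `scripts/common.sh` code, which also splits on `-` and `:`) might look like:

```shell
# Hypothetical sketch of a component-wise "less than" version comparison:
# split on dots, compare each field numerically, pad short versions with 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```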
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:59.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.555 --rc genhtml_branch_coverage=1 00:22:59.555 --rc genhtml_function_coverage=1 00:22:59.555 --rc genhtml_legend=1 00:22:59.555 --rc geninfo_all_blocks=1 00:22:59.555 --rc geninfo_unexecuted_blocks=1 00:22:59.555 00:22:59.555 ' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:59.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.555 --rc genhtml_branch_coverage=1 00:22:59.555 --rc genhtml_function_coverage=1 00:22:59.555 --rc genhtml_legend=1 00:22:59.555 --rc geninfo_all_blocks=1 00:22:59.555 --rc geninfo_unexecuted_blocks=1 00:22:59.555 00:22:59.555 ' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:59.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.555 --rc genhtml_branch_coverage=1 00:22:59.555 --rc genhtml_function_coverage=1 00:22:59.555 --rc genhtml_legend=1 00:22:59.555 --rc geninfo_all_blocks=1 00:22:59.555 --rc geninfo_unexecuted_blocks=1 00:22:59.555 00:22:59.555 ' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.555 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.556 19:29:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.556 19:29:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.556 19:29:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.839 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:04.840 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:04.840 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:04.840 Found net devices under 0000:31:00.0: cvl_0_0 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:04.840 Found net devices under 0000:31:00.1: cvl_0_1 00:23:04.840 19:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.840 19:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:04.840 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:05.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:23:05.101 00:23:05.101 --- 10.0.0.2 ping statistics --- 00:23:05.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.101 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:05.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:05.101 00:23:05.101 --- 10.0.0.1 ping statistics --- 00:23:05.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.101 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3867756 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:05.101 19:29:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3867756 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3867756 ']' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.101 19:29:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9cdac4bd4f3c5f1aaa1df1a14450599f 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wV7 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9cdac4bd4f3c5f1aaa1df1a14450599f 0 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9cdac4bd4f3c5f1aaa1df1a14450599f 0 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9cdac4bd4f3c5f1aaa1df1a14450599f 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wV7 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wV7 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.wV7 00:23:06.044 19:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.044 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4c286e9547f59e79603d1d8ac6d16d96055c58aac0845d13e8f912f2882afaa3 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.POU 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4c286e9547f59e79603d1d8ac6d16d96055c58aac0845d13e8f912f2882afaa3 3 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4c286e9547f59e79603d1d8ac6d16d96055c58aac0845d13e8f912f2882afaa3 3 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4c286e9547f59e79603d1d8ac6d16d96055c58aac0845d13e8f912f2882afaa3 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.POU 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.POU 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.POU 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3ee35715c2ad44b63e9abab4a77d8b4cc738092882cb43f0 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.AmA 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3ee35715c2ad44b63e9abab4a77d8b4cc738092882cb43f0 0 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3ee35715c2ad44b63e9abab4a77d8b4cc738092882cb43f0 0 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.045 19:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3ee35715c2ad44b63e9abab4a77d8b4cc738092882cb43f0 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.AmA 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.AmA 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.AmA 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=86d0097012a43261394b6fd336ab84dc76f0cf1acd16f513 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.AWK 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 86d0097012a43261394b6fd336ab84dc76f0cf1acd16f513 2 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 86d0097012a43261394b6fd336ab84dc76f0cf1acd16f513 2 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=86d0097012a43261394b6fd336ab84dc76f0cf1acd16f513 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.AWK 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.AWK 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.AWK 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9fd45b374bc35922b976d4ca87cb486e 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.72I 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9fd45b374bc35922b976d4ca87cb486e 1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9fd45b374bc35922b976d4ca87cb486e 1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9fd45b374bc35922b976d4ca87cb486e 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.72I 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.72I 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.72I 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=5e7da428a825a2774c11cf2094270510 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.u0e 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e7da428a825a2774c11cf2094270510 1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e7da428a825a2774c11cf2094270510 1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e7da428a825a2774c11cf2094270510 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:23:06.045 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.u0e 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.u0e 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.u0e 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:23:06.307 19:29:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3dbcfe3ddd542959def92c3f72d06398c144306faa571d5f 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oI5 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3dbcfe3ddd542959def92c3f72d06398c144306faa571d5f 2 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3dbcfe3ddd542959def92c3f72d06398c144306faa571d5f 2 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3dbcfe3ddd542959def92c3f72d06398c144306faa571d5f 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oI5 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oI5 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oI5 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5e4c2b40803ecc873637c826a43ce35c 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.XOQ 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5e4c2b40803ecc873637c826a43ce35c 0 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5e4c2b40803ecc873637c826a43ce35c 0 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5e4c2b40803ecc873637c826a43ce35c 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.XOQ 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.XOQ 00:23:06.307 19:29:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.XOQ 00:23:06.307 19:29:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55105bfdd4efca06ab786fad5d1c31114e4915513e6a7b1d8a3b95c9eb8abd71 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.9BY 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55105bfdd4efca06ab786fad5d1c31114e4915513e6a7b1d8a3b95c9eb8abd71 3 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55105bfdd4efca06ab786fad5d1c31114e4915513e6a7b1d8a3b95c9eb8abd71 3 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55105bfdd4efca06ab786fad5d1c31114e4915513e6a7b1d8a3b95c9eb8abd71 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
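The `format_dhchap_key` / `python -` steps traced above wrap each random hex string (read via `xxd -p -c0 -l N /dev/urandom`) in the NVMe DH-HMAC-CHAP secret format before it is written to the `spdk.key-*` temp file. A minimal Python sketch of that formatting step, hedged: this is a reconstruction, not SPDK's exact inline script, and it assumes (as the base64 payloads later in this log decode to) that the secret is the ASCII hex string with its little-endian CRC-32 appended:

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Hedged reconstruction of the format_dhchap_key step traced above:
    treat the ASCII hex string as the secret, append its CRC-32
    (little-endian), and base64 the blob into DHHC-1:<digest>:<b64>:."""
    secret = hex_key.encode("ascii")
    crc = zlib.crc32(secret) & 0xFFFFFFFF
    blob = secret + crc.to_bytes(4, "little")
    # digest field as seen in the log: 00 = null, 01 = sha256,
    # 02 = sha384, 03 = sha512
    return f"DHHC-1:{digest:02d}:{base64.b64encode(blob).decode('ascii')}:"
```

The `DHHC-1:00:...`, `DHHC-1:02:...` strings echoed later in this log (host/auth.sh@45-51) follow this shape, which is why the trailing base64 characters beyond the hex payload are the CRC bytes.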
00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.9BY 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.9BY 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.9BY 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3867756 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3867756 ']' 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
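The `waitforlisten 3867756` call above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") blocks until the target application's RPC socket accepts connections. A hedged Python sketch of that polling pattern (the real helper in autotest_common.sh is shell and also checks the PID; names here are illustrative):

```python
import socket
import time

def waitforlisten(rpc_sock_path: str, retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until something accepts connections on the given UNIX-domain
    socket path (e.g. /var/tmp/spdk.sock), up to `retries` attempts."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(rpc_sock_path)
            return True           # process is up and listening
        except OSError:
            time.sleep(delay)     # socket absent or not yet listening; retry
        finally:
            s.close()
    return False
```

Once this returns, the test proceeds to issue `rpc_cmd` calls against the socket, as the trace does next.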
00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.307 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wV7 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.POU ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.POU 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.AmA 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
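The host/auth.sh@80-82 loop traced above registers every generated key file with the target's keyring: `keys[i]` is added as `key<i>`, and when a controller-side key was generated (the `[[ -n ... ]]` guard on `ckeys[i]`), it is added as `ckey<i>`. A hedged Python reconstruction of the command sequence the loop issues via `rpc_cmd`:

```python
def keyring_commands(keys, ckeys):
    """Hedged reconstruction of the host/auth.sh registration loop:
    emit one keyring_file_add_key per key file, plus one for the
    controller key when that slot is non-empty."""
    cmds = []
    for i, path in enumerate(keys):
        cmds.append(("keyring_file_add_key", f"key{i}", path))
        # mirrors the [[ -n ${ckeys[i]} ]] guard in the trace
        if i < len(ckeys) and ckeys[i]:
            cmds.append(("keyring_file_add_key", f"ckey{i}", ckeys[i]))
    return cmds
```

This matches the trace: key4 (`/tmp/spdk.key-sha512.9BY`) is registered with no ckey4, because `ckeys[4]` was set to the empty string.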
00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.AWK ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AWK 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.72I 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.u0e ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u0e 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.oI5 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.XOQ ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.XOQ 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.9BY 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:06.568 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:06.568 19:29:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:06.569 19:29:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:09.109 Waiting for block devices as requested 00:23:09.109 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:09.109 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:09.369 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:09.369 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:09.369 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:09.630 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:09.630 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:09.630 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:09.630 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:09.889 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:09.889 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:10.459 No valid GPT data, bailing 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:23:10.459 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:23:10.719 00:23:10.719 Discovery Log Number of Records 2, Generation counter 2 00:23:10.719 =====Discovery Log Entry 0====== 00:23:10.719 trtype: tcp 00:23:10.719 adrfam: ipv4 00:23:10.719 subtype: current discovery subsystem 00:23:10.719 treq: not specified, sq flow control disable supported 00:23:10.719 portid: 1 00:23:10.719 trsvcid: 4420 00:23:10.719 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:10.719 traddr: 10.0.0.1 00:23:10.719 eflags: none 00:23:10.719 sectype: none 00:23:10.719 =====Discovery Log Entry 1====== 00:23:10.719 trtype: tcp 00:23:10.719 adrfam: ipv4 00:23:10.719 subtype: nvme subsystem 00:23:10.719 treq: not specified, sq flow control disable supported 00:23:10.719 portid: 1 00:23:10.719 trsvcid: 4420 00:23:10.719 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:10.719 traddr: 10.0.0.1 00:23:10.719 eflags: none 00:23:10.719 sectype: none 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.719 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.720 nvme0n1 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.720 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
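The `get_main_ns_ip` helper (nvmf/common.sh@769-783, traced repeatedly in this section) resolves which address the initiator should dial: an `ip_candidates` map keys the transport type to an environment-variable name, and the trace shows `tcp` resolving through `NVMF_INITIATOR_IP` to 10.0.0.1 (the kernel target in this test listens on the initiator-side address). A hedged Python sketch of that lookup-plus-indirection:

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    """Hedged sketch of nvmf/common.sh's get_main_ns_ip: the transport
    selects which variable holds the address, then the variable is
    dereferenced; an unset/empty value is an error in the shell helper."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    var = ip_candidates[transport]
    ip = env.get(var, "")
    if not ip:
        raise RuntimeError(f"{var} is not set")
    return ip
```

The resulting address feeds straight into `bdev_nvme_attach_controller -a <ip> -s 4420`, as the trace does after each `echo 10.0.0.1`.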
00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:10.980 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 nvme0n1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 19:29:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.981 
19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.981 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.241 nvme0n1 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.241 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.242 19:29:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:23:11.530 nvme0n1 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:11.530 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.531 nvme0n1 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.531 19:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.531 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.791 nvme0n1 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.791 
19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:11.791 
19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.791 19:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.791 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.050 nvme0n1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.050 19:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.050 19:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.050 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.309 nvme0n1 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.309 19:29:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.309 19:29:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.309 nvme0n1 00:23:12.309 19:29:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.309 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.309 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.310 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:12.568 19:29:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.568 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.569 nvme0n1 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.569 19:29:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.569 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.828 nvme0n1 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.828 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.087 nvme0n1 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.087 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.088 
19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.088 19:29:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 nvme0n1 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.347 19:29:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.347 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.605 nvme0n1 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.605 19:29:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:13.605 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:13.606 
19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==:
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]]
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY:
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.606 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.864 nvme0n1
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=:
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:13.864 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=:
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:13.865 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.123 nvme0n1
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:23:14.123 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW:
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=:
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW:
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]]
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=:
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.124 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.382 19:29:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.682 nvme0n1
00:23:14.682 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.682 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==:
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==:
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==:
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==:
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.683 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.013 nvme0n1
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:15.013 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy:
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo:
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy:
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]]
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo:
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.014 19:29:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.596 nvme0n1
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==:
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY:
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==:
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY:
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.596 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.855 nvme0n1
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=:
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=:
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.855 19:29:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.425 nvme0n1
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW:
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=:
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW:
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=:
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.425 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.995 nvme0n1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.995 19:29:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.995 19:29:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.995 19:29:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.995 19:29:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 nvme0n1 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.564 19:29:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.564 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.132 nvme0n1 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.132 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.133 19:29:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.702 nvme0n1 00:23:18.702 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.702 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.702 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.702 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.702 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.963 
19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.963 19:29:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 nvme0n1 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 nvme0n1 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.532 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.792 
19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 nvme0n1 
00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:19.792 19:29:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.792 
19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.792 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.051 nvme0n1 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.051 19:29:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.051 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 nvme0n1 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.311 19:29:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.311 19:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 nvme0n1 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.311 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.570 nvme0n1 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.570 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.571 
19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.571 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.830 nvme0n1 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 
00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:20.830 19:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.830 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.089 nvme0n1 00:23:21.089 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.089 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.090 19:29:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.090 19:29:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.349 nvme0n1 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.349 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.609 nvme0n1 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.609 19:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.609 19:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.609 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.609 19:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.870 nvme0n1 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:21.870 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.871 
19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.871 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.131 nvme0n1 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.131 19:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.131 19:29:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.131 19:29:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.391 nvme0n1 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:22.391 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:22.392 19:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.392 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.651 nvme0n1 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.651 19:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.651 19:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.651 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.652 
19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.652 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.910 nvme0n1 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.910 19:29:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.910 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.911 19:29:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.169 nvme0n1 
00:23:23.169 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.169 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.169 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.169 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.169 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:23.430 19:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.430 
19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.430 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 nvme0n1 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.690 19:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.690 19:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:23.690 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.691 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.261 nvme0n1 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:24.261 19:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.261 19:29:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.261 19:29:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.521 nvme0n1 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.521 19:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.521 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:24.522 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.091 nvme0n1 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.091 19:29:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.091 19:29:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.661 nvme0n1 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.661 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.662 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 nvme0n1 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.231 19:29:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.231 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.231 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.231 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.800 nvme0n1 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.800 19:30:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.737 nvme0n1 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.737 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.305 nvme0n1 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:23:28.305 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.306 19:30:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.306 nvme0n1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.306 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 nvme0n1 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.565 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.824 nvme0n1 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.824 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.825 nvme0n1 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.825 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.083 nvme0n1 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.083 19:30:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.083 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.084 19:30:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.084 19:30:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.343 nvme0n1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:29.343 19:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.343 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.602 nvme0n1 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.602 
19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.602 19:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.602 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 nvme0n1 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.861 19:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.861 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:29.862 19:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.862 nvme0n1 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.862 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:30.121 19:30:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 nvme0n1 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 
19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.121 
19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.121 19:30:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.380 nvme0n1 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.380 19:30:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.380 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.639 
19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.639 nvme0n1 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.639 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 
00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.899 19:30:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.899 nvme0n1 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.899 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.159 19:30:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.159 19:30:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 nvme0n1 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.419 19:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.419 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 nvme0n1 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.679 
19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.679 19:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.679 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.680 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.939 nvme0n1 00:23:31.939 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.939 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.939 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.939 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.939 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.940 19:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:31.940 19:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.940 19:30:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.940 19:30:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.507 nvme0n1 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.507 19:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.507 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:32.508 19:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.508 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.768 nvme0n1 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.768 19:30:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.768 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.337 nvme0n1 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.337 19:30:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.338 
19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.338 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.597 nvme0n1 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.597 19:30:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWNkYWM0YmQ0ZjNjNWYxYWFhMWRmMWExNDQ1MDU5OWaSyreW: 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGMyODZlOTU0N2Y1OWU3OTYwM2QxZDhhYzZkMTZkOTYwNTVjNThhYWMwODQ1ZDEzZThmOTEyZjI4ODJhZmFhM33/MWY=: 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.597 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.598 19:30:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.535 nvme0n1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:34.535 19:30:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.535 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.103 nvme0n1 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
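Stripped of the xtrace noise, every iteration above performs the same four RPCs: configure the allowed DH-HMAC-CHAP digest and DH group, attach a controller with the key pair under test, confirm the controller came up, and detach it. A minimal sketch composing those RPC argument vectors (target address, port, and NQNs copied verbatim from the log; this only builds the command lists, it does not talk to an SPDK target):

```python
# Sketch of the per-iteration RPC sequence recorded in this log.
# Address/NQN values are taken from the log lines above; the function is a
# hypothetical helper for illustration, not part of the SPDK test scripts.
def auth_iteration(digest, dhgroup, keyid, have_ctrlr_key):
    # Restrict the initiator to one digest and one DH group per iteration.
    set_opts = ["bdev_nvme_set_options",
                "--dhchap-digests", digest, "--dhchap-dhgroups", dhgroup]
    # Attach using the host key (and, if configured, the controller key).
    attach = ["bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
              "-f", "ipv4", "-a", "10.0.0.1", "-s", "4420",
              "-q", "nqn.2024-02.io.spdk:host0",
              "-n", "nqn.2024-02.io.spdk:cnode0",
              "--dhchap-key", f"key{keyid}"]
    if have_ctrlr_key:
        attach += ["--dhchap-ctrlr-key", f"ckey{keyid}"]
    # Verify the controller exists, then tear it down for the next iteration.
    verify = ["bdev_nvme_get_controllers"]
    detach = ["bdev_nvme_detach_controller", "nvme0"]
    return [set_opts, attach, verify, detach]
```

Iterations whose `ckey` is empty (keyid 4 in this run) omit `--dhchap-ctrlr-key`, which is why the attach command for key4 is shorter than the others in the log.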
00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.103 
19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.103 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.104 19:30:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.670 nvme0n1 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.670 19:30:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==: 00:23:35.670 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: ]] 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWU0YzJiNDA4MDNlY2M4NzM2MzdjODI2YTQzY2UzNWPt1+hY: 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.671 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
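The `key=`/`ckey=` values logged above use the NVMe DH-HMAC-CHAP ASCII secret representation (the same format `nvme gen-dhchap-key` emits): `DHHC-1:<t>:<base64 payload>:`, where the payload is the raw secret followed by a 4-byte CRC-32 of the secret. A small sketch that splits one of this log's keys into those fields (parsing only; it does not validate the CRC):

```python
import base64

def parse_dhchap_key(key):
    """Split a DHHC-1 ASCII secret into (transform id, secret, crc bytes)."""
    prefix, transform, payload, trailer = key.split(":")
    assert prefix == "DHHC-1" and trailer == ""
    blob = base64.b64decode(payload)
    # Last 4 bytes of the decoded payload are the CRC-32 suffix.
    return transform, blob[:-4], blob[-4:]

# keyid=3 from the log above: transform id 02 with a 48-byte secret
t, secret, crc = parse_dhchap_key(
    "DHHC-1:02:M2RiY2ZlM2RkZDU0Mjk1OWRlZjkyYzNmNzJkMDYzOThjMTQ0MzA2ZmFhNTcxZDVmDvHnfw==:")
print(t, len(secret), len(crc))  # → 02 48 4
```

The transform id (`00`–`03` here) selects the optional hash transformation applied to the secret, and the secrets exercised in this run decode to lengths between 32 and 64 bytes.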
00:23:36.246 nvme0n1 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.246 19:30:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NTUxMDViZmRkNGVmY2EwNmFiNzg2ZmFkNWQxYzMxMTE0ZTQ5MTU1MTNlNmE3YjFkOGEzYjk1YzllYjhhYmQ3MRQvwcQ=: 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.246 
19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.246 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.827 nvme0n1 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.827 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:36.828 
19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.828 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.087 request: 00:23:37.087 { 00:23:37.087 "name": "nvme0", 00:23:37.087 "trtype": "tcp", 00:23:37.087 "traddr": "10.0.0.1", 00:23:37.087 "adrfam": "ipv4", 00:23:37.087 "trsvcid": "4420", 00:23:37.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:37.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:37.087 "prchk_reftag": false, 00:23:37.087 "prchk_guard": false, 00:23:37.087 "hdgst": false, 00:23:37.087 "ddgst": false, 00:23:37.087 "allow_unrecognized_csi": false, 00:23:37.087 "method": "bdev_nvme_attach_controller", 00:23:37.087 "req_id": 1 00:23:37.087 } 00:23:37.087 Got JSON-RPC error response 00:23:37.087 response: 00:23:37.087 { 00:23:37.087 "code": -5, 00:23:37.087 "message": "Input/output 
error" 00:23:37.087 } 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.087 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.087 request: 00:23:37.087 { 00:23:37.087 "name": "nvme0", 00:23:37.087 "trtype": "tcp", 00:23:37.087 "traddr": "10.0.0.1", 
00:23:37.087 "adrfam": "ipv4", 00:23:37.087 "trsvcid": "4420", 00:23:37.087 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:37.087 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:37.087 "prchk_reftag": false, 00:23:37.087 "prchk_guard": false, 00:23:37.087 "hdgst": false, 00:23:37.087 "ddgst": false, 00:23:37.087 "dhchap_key": "key2", 00:23:37.088 "allow_unrecognized_csi": false, 00:23:37.088 "method": "bdev_nvme_attach_controller", 00:23:37.088 "req_id": 1 00:23:37.088 } 00:23:37.088 Got JSON-RPC error response 00:23:37.088 response: 00:23:37.088 { 00:23:37.088 "code": -5, 00:23:37.088 "message": "Input/output error" 00:23:37.088 } 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.088 19:30:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.088 19:30:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.088 request: 00:23:37.088 { 00:23:37.088 "name": "nvme0", 00:23:37.088 "trtype": "tcp", 00:23:37.088 "traddr": "10.0.0.1", 00:23:37.088 "adrfam": "ipv4", 00:23:37.088 "trsvcid": "4420", 00:23:37.088 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:37.088 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:37.088 "prchk_reftag": false, 00:23:37.088 "prchk_guard": false, 00:23:37.088 "hdgst": false, 00:23:37.088 "ddgst": false, 00:23:37.088 "dhchap_key": "key1", 00:23:37.088 "dhchap_ctrlr_key": "ckey2", 00:23:37.088 "allow_unrecognized_csi": false, 00:23:37.088 "method": "bdev_nvme_attach_controller", 00:23:37.088 "req_id": 1 00:23:37.088 } 00:23:37.088 Got JSON-RPC error response 00:23:37.088 response: 00:23:37.088 { 00:23:37.088 "code": -5, 00:23:37.088 "message": "Input/output error" 00:23:37.088 } 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.088 19:30:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.347 nvme0n1 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.347 19:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.347 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.348 19:30:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.348 request: 00:23:37.348 { 00:23:37.348 "name": "nvme0", 00:23:37.348 "dhchap_key": "key1", 00:23:37.348 "dhchap_ctrlr_key": "ckey2", 00:23:37.348 "method": "bdev_nvme_set_keys", 00:23:37.348 "req_id": 1 00:23:37.348 } 00:23:37.348 Got JSON-RPC error response 00:23:37.348 response: 00:23:37.348 { 00:23:37.348 "code": -13, 00:23:37.348 "message": "Permission denied" 00:23:37.348 } 00:23:37.348 
19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:37.348 19:30:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:23:38.727 19:30:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2VlMzU3MTVjMmFkNDRiNjNlOWFiYWI0YTc3ZDhiNGNjNzM4MDkyODgyY2I0M2YwSI8IqA==: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: ]] 00:23:39.660 19:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODZkMDA5NzAxMmE0MzI2MTM5NGI2ZmQzMzZhYjg0ZGM3NmYwY2YxYWNkMTZmNTEz/RUhgw==: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.660 nvme0n1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.660 19:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OWZkNDViMzc0YmMzNTkyMmI5NzZkNGNhODdjYjQ4NmUTvhpy: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWU3ZGE0MjhhODI1YTI3NzRjMTFjZjIwOTQyNzA1MTCrenpo: 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:39.660 
19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.660 request: 00:23:39.660 { 00:23:39.660 "name": "nvme0", 00:23:39.660 "dhchap_key": "key2", 00:23:39.660 "dhchap_ctrlr_key": "ckey1", 00:23:39.660 "method": "bdev_nvme_set_keys", 00:23:39.660 "req_id": 1 00:23:39.660 } 00:23:39.660 Got JSON-RPC error response 00:23:39.660 response: 00:23:39.660 { 00:23:39.660 "code": -13, 00:23:39.660 "message": "Permission denied" 00:23:39.660 } 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.660 19:30:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:23:39.660 19:30:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.032 rmmod nvme_tcp 00:23:41.032 rmmod nvme_fabrics 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3867756 ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3867756 ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3867756' 00:23:41.032 killing process with pid 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3867756 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.032 19:30:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:42.937 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:43.196 19:30:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:45.732 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:45.732 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:45.991 19:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.wV7 /tmp/spdk.key-null.AmA /tmp/spdk.key-sha256.72I /tmp/spdk.key-sha384.oI5 /tmp/spdk.key-sha512.9BY 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:45.991 19:30:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:48.559 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:23:48.559 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:23:48.559 00:23:48.559 real 0m49.030s 00:23:48.559 user 0m42.927s 00:23:48.559 sys 0m11.407s 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.559 ************************************ 00:23:48.559 END TEST nvmf_auth_host 00:23:48.559 ************************************ 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:23:48.559 19:30:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.559 ************************************ 00:23:48.559 START TEST nvmf_digest 00:23:48.559 ************************************ 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:48.559 * Looking for test storage... 00:23:48.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:48.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.559 --rc genhtml_branch_coverage=1 00:23:48.559 --rc genhtml_function_coverage=1 00:23:48.559 --rc genhtml_legend=1 00:23:48.559 --rc geninfo_all_blocks=1 00:23:48.559 --rc geninfo_unexecuted_blocks=1 00:23:48.559 00:23:48.559 ' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:48.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.559 --rc genhtml_branch_coverage=1 00:23:48.559 --rc genhtml_function_coverage=1 00:23:48.559 --rc genhtml_legend=1 00:23:48.559 --rc geninfo_all_blocks=1 00:23:48.559 --rc geninfo_unexecuted_blocks=1 00:23:48.559 00:23:48.559 ' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:48.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.559 --rc genhtml_branch_coverage=1 00:23:48.559 --rc genhtml_function_coverage=1 00:23:48.559 --rc genhtml_legend=1 00:23:48.559 --rc geninfo_all_blocks=1 00:23:48.559 --rc geninfo_unexecuted_blocks=1 00:23:48.559 00:23:48.559 ' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:48.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.559 --rc genhtml_branch_coverage=1 00:23:48.559 --rc genhtml_function_coverage=1 00:23:48.559 --rc genhtml_legend=1 00:23:48.559 --rc geninfo_all_blocks=1 00:23:48.559 --rc geninfo_unexecuted_blocks=1 00:23:48.559 00:23:48.559 ' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.559 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:48.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:48.560 19:30:22 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:23:48.560 19:30:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.833 19:30:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:53.833 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:53.833 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:53.833 Found net devices under 0000:31:00.0: cvl_0_0 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:53.833 Found net devices under 0000:31:00.1: cvl_0_1 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.833 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:54.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:54.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:23:54.093 00:23:54.093 --- 10.0.0.2 ping statistics --- 00:23:54.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.093 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:54.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:54.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:23:54.093 00:23:54.093 --- 10.0.0.1 ping statistics --- 00:23:54.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:54.093 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:54.093 ************************************ 00:23:54.093 START TEST nvmf_digest_clean 00:23:54.093 ************************************ 00:23:54.093 
19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3885249 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3885249 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3885249 ']' 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:54.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:54.093 19:30:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.352 [2024-11-26 19:30:27.961894] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:23:54.352 [2024-11-26 19:30:27.961942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:54.352 [2024-11-26 19:30:28.046460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.352 [2024-11-26 19:30:28.081579] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:54.352 [2024-11-26 19:30:28.081612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:54.352 [2024-11-26 19:30:28.081620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:54.352 [2024-11-26 19:30:28.081627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:54.352 [2024-11-26 19:30:28.081633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:54.352 [2024-11-26 19:30:28.082247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.920 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:55.179 null0 00:23:55.179 [2024-11-26 19:30:28.846158] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.179 [2024-11-26 19:30:28.870362] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3885291 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3885291 /var/tmp/bperf.sock 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3885291 ']' 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:55.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:55.179 19:30:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:55.179 [2024-11-26 19:30:28.910402] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:23:55.179 [2024-11-26 19:30:28.910459] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885291 ] 00:23:55.179 [2024-11-26 19:30:28.996195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.441 [2024-11-26 19:30:29.049597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.012 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:56.012 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:23:56.012 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:56.012 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:56.012 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:56.270 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:23:56.270 19:30:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:56.528 nvme0n1 00:23:56.528 19:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:56.528 19:30:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:56.528 Running I/O for 2 seconds... 00:23:58.966 22643.00 IOPS, 88.45 MiB/s [2024-11-26T18:30:32.831Z] 25004.50 IOPS, 97.67 MiB/s 00:23:58.966 Latency(us) 00:23:58.966 [2024-11-26T18:30:32.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.966 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:58.966 nvme0n1 : 2.00 25027.11 97.76 0.00 0.00 5109.13 2034.35 17913.17 00:23:58.966 [2024-11-26T18:30:32.831Z] =================================================================================================================== 00:23:58.966 [2024-11-26T18:30:32.831Z] Total : 25027.11 97.76 0.00 0.00 5109.13 2034.35 17913.17 00:23:58.966 { 00:23:58.966 "results": [ 00:23:58.966 { 00:23:58.966 "job": "nvme0n1", 00:23:58.966 "core_mask": "0x2", 00:23:58.966 "workload": "randread", 00:23:58.966 "status": "finished", 00:23:58.966 "queue_depth": 128, 00:23:58.966 "io_size": 4096, 00:23:58.966 "runtime": 2.003308, 00:23:58.966 "iops": 25027.105168052043, 00:23:58.966 "mibps": 97.7621295627033, 00:23:58.966 "io_failed": 0, 00:23:58.966 "io_timeout": 0, 00:23:58.966 "avg_latency_us": 5109.128829141486, 00:23:58.966 "min_latency_us": 2034.3466666666666, 00:23:58.966 "max_latency_us": 17913.173333333332 00:23:58.966 } 00:23:58.966 ], 00:23:58.966 "core_count": 1 
00:23:58.966 } 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:58.966 | select(.opcode=="crc32c") 00:23:58.966 | "\(.module_name) \(.executed)"' 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3885291 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3885291 ']' 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3885291 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3885291 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885291' 00:23:58.966 killing process with pid 3885291 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3885291 00:23:58.966 Received shutdown signal, test time was about 2.000000 seconds 00:23:58.966 00:23:58.966 Latency(us) 00:23:58.966 [2024-11-26T18:30:32.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:58.966 [2024-11-26T18:30:32.831Z] =================================================================================================================== 00:23:58.966 [2024-11-26T18:30:32.831Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3885291 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@80 -- # scan_dsa=false 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3886287 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3886287 /var/tmp/bperf.sock 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3886287 ']' 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:58.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:58.966 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:58.966 [2024-11-26 19:30:32.713087] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:23:58.966 [2024-11-26 19:30:32.713153] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886287 ] 00:23:58.966 I/O size of 131072 is greater than zero copy threshold (65536). 
00:23:58.966 Zero copy mechanism will not be used. 00:23:58.966 [2024-11-26 19:30:32.777006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.226 [2024-11-26 19:30:32.806564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.226 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:59.226 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:23:59.226 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:59.226 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:59.226 19:30:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:59.226 19:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:59.226 19:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:59.794 nvme0n1 00:23:59.794 19:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:59.794 19:30:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:59.794 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:59.794 Zero copy mechanism will not be used. 00:23:59.794 Running I/O for 2 seconds... 
00:24:01.677 3623.00 IOPS, 452.88 MiB/s [2024-11-26T18:30:35.542Z] 3773.50 IOPS, 471.69 MiB/s 00:24:01.677 Latency(us) 00:24:01.677 [2024-11-26T18:30:35.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.677 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:01.677 nvme0n1 : 2.00 3773.57 471.70 0.00 0.00 4237.32 532.48 12014.93 00:24:01.677 [2024-11-26T18:30:35.542Z] =================================================================================================================== 00:24:01.677 [2024-11-26T18:30:35.542Z] Total : 3773.57 471.70 0.00 0.00 4237.32 532.48 12014.93 00:24:01.677 { 00:24:01.677 "results": [ 00:24:01.677 { 00:24:01.677 "job": "nvme0n1", 00:24:01.677 "core_mask": "0x2", 00:24:01.677 "workload": "randread", 00:24:01.677 "status": "finished", 00:24:01.677 "queue_depth": 16, 00:24:01.677 "io_size": 131072, 00:24:01.677 "runtime": 2.004201, 00:24:01.677 "iops": 3773.573608635062, 00:24:01.677 "mibps": 471.69670107938276, 00:24:01.677 "io_failed": 0, 00:24:01.677 "io_timeout": 0, 00:24:01.677 "avg_latency_us": 4237.315520296179, 00:24:01.677 "min_latency_us": 532.48, 00:24:01.677 "max_latency_us": 12014.933333333332 00:24:01.677 } 00:24:01.677 ], 00:24:01.677 "core_count": 1 00:24:01.677 } 00:24:01.677 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:01.677 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:01.677 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:01.677 | select(.opcode=="crc32c") 00:24:01.677 | "\(.module_name) \(.executed)"' 00:24:01.677 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:01.677 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3886287 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3886287 ']' 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3886287 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3886287 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3886287' 00:24:01.936 killing process with pid 3886287 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3886287 00:24:01.936 Received shutdown signal, test time was about 2.000000 seconds 
00:24:01.936 00:24:01.936 Latency(us) 00:24:01.936 [2024-11-26T18:30:35.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.936 [2024-11-26T18:30:35.801Z] =================================================================================================================== 00:24:01.936 [2024-11-26T18:30:35.801Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3886287 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3886965 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3886965 /var/tmp/bperf.sock 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3886965 ']' 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 
00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.936 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:02.195 [2024-11-26 19:30:35.822474] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:02.195 [2024-11-26 19:30:35.822527] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886965 ] 00:24:02.195 [2024-11-26 19:30:35.886305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.195 [2024-11-26 19:30:35.915514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.195 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.195 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:02.195 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.195 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.195 19:30:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:02.454 19:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.454 19:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.713 nvme0n1 00:24:02.714 19:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.714 19:30:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.714 Running I/O for 2 seconds... 
00:24:05.028 30082.00 IOPS, 117.51 MiB/s [2024-11-26T18:30:38.893Z] 30242.00 IOPS, 118.13 MiB/s 00:24:05.028 Latency(us) 00:24:05.028 [2024-11-26T18:30:38.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.028 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:05.028 nvme0n1 : 2.00 30247.97 118.16 0.00 0.00 4226.77 2007.04 12724.91 00:24:05.028 [2024-11-26T18:30:38.893Z] =================================================================================================================== 00:24:05.028 [2024-11-26T18:30:38.893Z] Total : 30247.97 118.16 0.00 0.00 4226.77 2007.04 12724.91 00:24:05.028 { 00:24:05.028 "results": [ 00:24:05.028 { 00:24:05.028 "job": "nvme0n1", 00:24:05.028 "core_mask": "0x2", 00:24:05.028 "workload": "randwrite", 00:24:05.028 "status": "finished", 00:24:05.028 "queue_depth": 128, 00:24:05.028 "io_size": 4096, 00:24:05.028 "runtime": 2.003837, 00:24:05.028 "iops": 30247.969270953676, 00:24:05.028 "mibps": 118.1561299646628, 00:24:05.028 "io_failed": 0, 00:24:05.028 "io_timeout": 0, 00:24:05.028 "avg_latency_us": 4226.767744121076, 00:24:05.028 "min_latency_us": 2007.04, 00:24:05.028 "max_latency_us": 12724.906666666666 00:24:05.028 } 00:24:05.028 ], 00:24:05.028 "core_count": 1 00:24:05.028 } 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:05.028 | select(.opcode=="crc32c") 00:24:05.028 | "\(.module_name) \(.executed)"' 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:05.028 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3886965 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3886965 ']' 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3886965 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3886965 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3886965' 00:24:05.029 killing process with pid 3886965 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3886965 00:24:05.029 Received shutdown signal, test time was about 2.000000 seconds 
00:24:05.029 00:24:05.029 Latency(us) 00:24:05.029 [2024-11-26T18:30:38.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.029 [2024-11-26T18:30:38.894Z] =================================================================================================================== 00:24:05.029 [2024-11-26T18:30:38.894Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.029 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3886965 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3887643 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3887643 /var/tmp/bperf.sock 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3887643 ']' 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 
00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.289 19:30:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:05.289 [2024-11-26 19:30:38.936244] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:05.289 [2024-11-26 19:30:38.936298] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887643 ] 00:24:05.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.289 Zero copy mechanism will not be used. 
00:24:05.289 [2024-11-26 19:30:39.001353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.289 [2024-11-26 19:30:39.030696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.289 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.289 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:05.289 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.289 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.289 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:05.556 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.556 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.131 nvme0n1 00:24:06.131 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.131 19:30:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.131 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.131 Zero copy mechanism will not be used. 00:24:06.131 Running I/O for 2 seconds... 
00:24:08.046 3815.00 IOPS, 476.88 MiB/s [2024-11-26T18:30:41.911Z] 3893.50 IOPS, 486.69 MiB/s 00:24:08.046 Latency(us) 00:24:08.046 [2024-11-26T18:30:41.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.046 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:08.046 nvme0n1 : 2.00 3896.10 487.01 0.00 0.00 4102.15 1126.40 7591.25 00:24:08.046 [2024-11-26T18:30:41.911Z] =================================================================================================================== 00:24:08.046 [2024-11-26T18:30:41.911Z] Total : 3896.10 487.01 0.00 0.00 4102.15 1126.40 7591.25 00:24:08.046 { 00:24:08.046 "results": [ 00:24:08.046 { 00:24:08.046 "job": "nvme0n1", 00:24:08.046 "core_mask": "0x2", 00:24:08.046 "workload": "randwrite", 00:24:08.046 "status": "finished", 00:24:08.046 "queue_depth": 16, 00:24:08.046 "io_size": 131072, 00:24:08.046 "runtime": 2.003798, 00:24:08.046 "iops": 3896.101303624417, 00:24:08.046 "mibps": 487.01266295305214, 00:24:08.046 "io_failed": 0, 00:24:08.046 "io_timeout": 0, 00:24:08.046 "avg_latency_us": 4102.148984244908, 00:24:08.046 "min_latency_us": 1126.4, 00:24:08.046 "max_latency_us": 7591.253333333333 00:24:08.046 } 00:24:08.046 ], 00:24:08.046 "core_count": 1 00:24:08.046 } 00:24:08.046 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:08.046 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:08.046 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:08.046 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:08.046 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:08.046 | 
select(.opcode=="crc32c") 00:24:08.046 | "\(.module_name) \(.executed)"' 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3887643 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3887643 ']' 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3887643 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.305 19:30:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3887643 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3887643' 00:24:08.305 killing process with pid 3887643 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3887643 00:24:08.305 Received shutdown signal, test time was about 2.000000 seconds 00:24:08.305 00:24:08.305 Latency(us) 
00:24:08.305 [2024-11-26T18:30:42.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.305 [2024-11-26T18:30:42.170Z] =================================================================================================================== 00:24:08.305 [2024-11-26T18:30:42.170Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3887643 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3885249 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3885249 ']' 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3885249 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3885249 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3885249' 00:24:08.305 killing process with pid 3885249 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3885249 00:24:08.305 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3885249 00:24:08.565 00:24:08.565 real 0m14.342s 00:24:08.565 user 
0m28.164s 00:24:08.565 sys 0m2.810s 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.565 ************************************ 00:24:08.565 END TEST nvmf_digest_clean 00:24:08.565 ************************************ 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:08.565 ************************************ 00:24:08.565 START TEST nvmf_digest_error 00:24:08.565 ************************************ 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3888350 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3888350 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3888350 ']' 00:24:08.565 19:30:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.565 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:08.565 [2024-11-26 19:30:42.353666] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:08.565 [2024-11-26 19:30:42.353713] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.565 [2024-11-26 19:30:42.424933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.825 [2024-11-26 19:30:42.453011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.825 [2024-11-26 19:30:42.453037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:08.825 [2024-11-26 19:30:42.453043] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.825 [2024-11-26 19:30:42.453048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.825 [2024-11-26 19:30:42.453052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.825 [2024-11-26 19:30:42.453540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.825 [2024-11-26 19:30:42.509887] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.825 19:30:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.825 null0 00:24:08.825 [2024-11-26 19:30:42.584848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.825 [2024-11-26 19:30:42.609045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3888371 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3888371 /var/tmp/bperf.sock 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3888371 ']' 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:08.825 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:08.825 [2024-11-26 19:30:42.647607] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:08.825 [2024-11-26 19:30:42.647654] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888371 ] 00:24:09.084 [2024-11-26 19:30:42.712217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.084 [2024-11-26 19:30:42.742197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.084 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.084 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:09.084 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.084 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.344 19:30:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.603 nvme0n1 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:09.603 19:30:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.603 Running I/O for 2 seconds... 00:24:09.603 [2024-11-26 19:30:43.381381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.381412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.381420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.392131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.392151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.392158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.403189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.403206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.403213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.411636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.411653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7297 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.411660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.420459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.420476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.420482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.429820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.429839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.429846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.439547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.439565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.439573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.448466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.448483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.448490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.603 [2024-11-26 19:30:43.458275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.603 [2024-11-26 19:30:43.458291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-26 19:30:43.458298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.469686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.469704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.469711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.480016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.480033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.480040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.488896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.488912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.488918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.497595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.497612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.497618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.507301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.507318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.507324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.515779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.515796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.515803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.524797] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.524815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.524821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.533333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.533350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.533363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.542708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.542724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.542730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.551586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.551602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.551608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.561167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.561184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.561190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.570016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.570034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.570040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.579745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.579762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.579768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.587836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.587853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.587859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.596222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.865 [2024-11-26 19:30:43.596239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:19556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.865 [2024-11-26 19:30:43.596246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.865 [2024-11-26 19:30:43.605414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.605431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.605438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.614439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.614459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.614466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.623992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.624008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 
19:30:43.624014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.635122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.635139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.635146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.647265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.647282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.647288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.658128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.658145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.658152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.666387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.666404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5791 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.666410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.678215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.678232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.678238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.687836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.687852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.687859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.696355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.696372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.696381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.705209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.705226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.705232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.713409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.713426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.713432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:09.866 [2024-11-26 19:30:43.722769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:09.866 [2024-11-26 19:30:43.722788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.866 [2024-11-26 19:30:43.722794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.733086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.733106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:23763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.733112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.742671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.742687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.742694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.751591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.751607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.751613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.760084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.760105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.760111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.769142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.769159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.769165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.777633] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.777653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.777660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.786954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.786974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.786981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.794630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.794647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.794653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.128 [2024-11-26 19:30:43.804940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.128 [2024-11-26 19:30:43.804957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.128 [2024-11-26 19:30:43.804963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.813069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.813085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.813091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.822698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.822714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.822721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.831609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.831626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.831635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.839453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.839472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.839479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.850669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.850687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.850693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.858793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.858810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.858817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.868198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.868215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.868222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.878493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.878510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 
19:30:43.878516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.888796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.888814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.888823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.896749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.896765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.896771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.906200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.906217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.906223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.916000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.916016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9854 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.916023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.923847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.923863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.923870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.932066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.932082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.932092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.942562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.942579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.942585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.951402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.951419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.951425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.960769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.960786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.960793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.969276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.969293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.969299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.981301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.981318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.981324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.129 [2024-11-26 19:30:43.989645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:10.129 [2024-11-26 19:30:43.989662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.129 [2024-11-26 19:30:43.989668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:43.999642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:43.999658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:43.999664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.007987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.008004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.008010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.019013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.019032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.019038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.027484] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.027501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.027507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.036558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.036575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.036581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.046433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.046451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.046457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.055826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.055843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:22229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.064108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.064126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.064132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.072781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.072798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.072804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.081521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.081537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.081544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.092105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.092122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.092129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.102276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.102293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:19979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.102299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.111751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.111768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.111774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.121916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.121933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.121939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.130312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.130329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 
19:30:44.130335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.139729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.139746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.139752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.149940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.149957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.149963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.159230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.159246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.159253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.168258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.168275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16633 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.168281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.177668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.177688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.177694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.185756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.391 [2024-11-26 19:30:44.185772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.391 [2024-11-26 19:30:44.185778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.391 [2024-11-26 19:30:44.196008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.196025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.196032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.206337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.206354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.206360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.214838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.214855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.214864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.224177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.224193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.224200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.232705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.232722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.232730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.242154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.242170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.242176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.392 [2024-11-26 19:30:44.250624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.392 [2024-11-26 19:30:44.250641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.392 [2024-11-26 19:30:44.250647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.652 [2024-11-26 19:30:44.260901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.260920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:7497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.260927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.269561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.269578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.269585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.278959] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.278976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.278982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.287453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.287471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.287477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.296025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.296042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.296048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.304551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.304568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.304574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.314293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.314310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.323147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.323164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.323170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.333532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.333549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.333558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.343165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.343182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.343188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.351437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.351454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.351460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.361086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.361107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.361113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 27185.00 IOPS, 106.19 MiB/s [2024-11-26T18:30:44.518Z] [2024-11-26 19:30:44.369809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.369827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.369834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.378713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.378731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16270 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.378737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.390767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.390784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.390790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.402782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.402800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.402806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.413944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.413962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.413969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.421715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.421736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.421742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.653 [2024-11-26 19:30:44.431969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.653 [2024-11-26 19:30:44.431987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.653 [2024-11-26 19:30:44.431993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.440588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.440605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7449 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.440612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.449923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.449940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.449946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.459002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.459019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.459025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.468580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.468598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.468604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.476560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.476577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.476583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.486048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.486065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.486071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.494884] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.494902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.494911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.502885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.502902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.502909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.654 [2024-11-26 19:30:44.512757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.654 [2024-11-26 19:30:44.512774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.654 [2024-11-26 19:30:44.512780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.521380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.521397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.521403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.530159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.530176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.530182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.539745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.539763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.539769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.547976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.547993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.548001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.556620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.556637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.556643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.566066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.566084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.566090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.575075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.575096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.575107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.584647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.584664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.584670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.593698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.593714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 
19:30:44.593721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.602928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.602944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.602950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.613806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.613824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.613830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.621966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.621984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.621990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.631762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.631779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11252 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.631786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.641161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.641178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.641184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.649741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.649758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.649764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.658146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.658164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.658171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.667714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.667732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.667738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.679562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.679579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.679586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.688072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.688089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.688095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.697925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.697941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.915 [2024-11-26 19:30:44.697947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.915 [2024-11-26 19:30:44.709092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:10.915 [2024-11-26 19:30:44.709113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.709120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.719479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.719496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.719502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.729922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.729939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.729946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.742129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.742148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.742159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.753909] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.753927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.753934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.765659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.765677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.765683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:10.916 [2024-11-26 19:30:44.777146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:10.916 [2024-11-26 19:30:44.777163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.916 [2024-11-26 19:30:44.777169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.786085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.786106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.786112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.794730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.794747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.794753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.803408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.803426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.803432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.812755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.812773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.812779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.821508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.821525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.821532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.829948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.829969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.829975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.841128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.841146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.841152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.848808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.848825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.848831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.858414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.858432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 
19:30:44.858439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.867383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.867400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.867406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.875591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.875609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.875615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.885631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.885649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.885655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.894998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.895015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3760 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.895021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.904402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.904419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.904425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.913115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.913132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.913139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.921842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.921859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.177 [2024-11-26 19:30:44.921866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.177 [2024-11-26 19:30:44.931552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.177 [2024-11-26 19:30:44.931570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.931576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.940180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.940198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.940204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.949976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.949993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.949999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.958882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.958899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.958905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.967067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.967085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:10112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.967091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.976419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.976436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.976442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.985974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.985994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.986001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:44.995231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:44.995248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:44.995254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:45.004193] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:45.004210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:45.004216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:45.012585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:45.012603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:45.012610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:45.022394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:45.022411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:45.022417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:45.032846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:45.032864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:45.032871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:11.178 [2024-11-26 19:30:45.040374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.178 [2024-11-26 19:30:45.040393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.178 [2024-11-26 19:30:45.040400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.050148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.050167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.050173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.060170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.060188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.060195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.069162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.069179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.069187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.077015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.077033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.077039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.088085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.088107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.099450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.099467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.099473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.111491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.111507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.111513] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.121916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.121933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.121940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.130782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.130799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.130805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.142006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.142026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.142032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.149925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.149942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1361 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.149952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.159550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.159567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.159574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.168334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.168351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.168357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.440 [2024-11-26 19:30:45.177405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.440 [2024-11-26 19:30:45.177422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.440 [2024-11-26 19:30:45.177428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.185679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.185695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:17714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.185701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.195350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.195367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.195373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.204728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.204745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.204751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.213404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.213424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.213431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.223088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.223109] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.223115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.232324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.232348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.232354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.241047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.241064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.241070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.250384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.250401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.250407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.259661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.259678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.259684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.268056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.268073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.268079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.277072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.277089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.277095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.286047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.286063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.286070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.441 [2024-11-26 19:30:45.295021] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.441 [2024-11-26 19:30:45.295039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.441 [2024-11-26 19:30:45.295045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.702 [2024-11-26 19:30:45.303928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.303945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.303951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.313208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.313225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.313231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.321804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.321821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.321827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.331287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.331304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.331310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.340251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.340268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.340274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.349235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.349251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.349257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.357728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.357744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.357750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 [2024-11-26 19:30:45.367323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c8e80) 00:24:11.703 [2024-11-26 19:30:45.367340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.703 [2024-11-26 19:30:45.367347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.703 27185.50 IOPS, 106.19 MiB/s 00:24:11.703 Latency(us) 00:24:11.703 [2024-11-26T18:30:45.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:11.703 nvme0n1 : 2.00 27191.05 106.22 0.00 0.00 4701.52 2198.19 15837.87 00:24:11.703 [2024-11-26T18:30:45.568Z] =================================================================================================================== 00:24:11.703 [2024-11-26T18:30:45.568Z] Total : 27191.05 106.22 0.00 0.00 4701.52 2198.19 15837.87 00:24:11.703 { 00:24:11.703 "results": [ 00:24:11.703 { 00:24:11.703 "job": "nvme0n1", 00:24:11.703 "core_mask": "0x2", 00:24:11.703 "workload": "randread", 00:24:11.703 "status": "finished", 00:24:11.703 "queue_depth": 128, 00:24:11.703 "io_size": 4096, 00:24:11.703 "runtime": 2.004299, 00:24:11.703 "iops": 27191.052831937748, 00:24:11.703 "mibps": 106.21505012475683, 00:24:11.703 "io_failed": 0, 00:24:11.703 "io_timeout": 0, 00:24:11.703 "avg_latency_us": 4701.515525300158, 00:24:11.703 "min_latency_us": 2198.1866666666665, 00:24:11.703 "max_latency_us": 15837.866666666667 00:24:11.703 } 00:24:11.703 ], 00:24:11.703 "core_count": 1 00:24:11.703 } 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:11.703 
19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:11.703 | .driver_specific 00:24:11.703 | .nvme_error 00:24:11.703 | .status_code 00:24:11.703 | .command_transient_transport_error' 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 )) 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3888371 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3888371 ']' 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3888371 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.703 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3888371 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3888371' 00:24:11.963 killing process with pid 3888371 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 3888371 00:24:11.963 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.963 00:24:11.963 Latency(us) 00:24:11.963 [2024-11-26T18:30:45.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.963 [2024-11-26T18:30:45.828Z] =================================================================================================================== 00:24:11.963 [2024-11-26T18:30:45.828Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3888371 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3889050 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3889050 /var/tmp/bperf.sock 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3889050 ']' 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.963 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:11.963 [2024-11-26 19:30:45.729292] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:11.963 [2024-11-26 19:30:45.729348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889050 ] 00:24:11.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:11.963 Zero copy mechanism will not be used. 
00:24:11.963 [2024-11-26 19:30:45.792983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.963 [2024-11-26 19:30:45.822466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.223 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:12.223 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:12.223 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.223 19:30:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.223 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:12.482 nvme0n1 00:24:12.482 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:12.482 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.482 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.482 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.483 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:12.483 19:30:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:12.745 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.745 Zero copy mechanism will not be used. 00:24:12.745 Running I/O for 2 seconds... 00:24:12.745 [2024-11-26 19:30:46.410579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.410615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.410625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.420605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.420629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.420637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.745 
[2024-11-26 19:30:46.431380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.431400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.431407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.441421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.441441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.441448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.449391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.449408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.449415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.453638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.453657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.453663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.457483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.457500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.457507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.464848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.464866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.464872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.471366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.471384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.471391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.478424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.478441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.478447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.487457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.487475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.487481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.497395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.497412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.497418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.508488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.508505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.508512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.519157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.519175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.519181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.530856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.530874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.542770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.745 [2024-11-26 19:30:46.542787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.745 [2024-11-26 19:30:46.542793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.745 [2024-11-26 19:30:46.554609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.746 [2024-11-26 19:30:46.554626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.746 [2024-11-26 19:30:46.554632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:12.746 [2024-11-26 19:30:46.565961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.746 [2024-11-26 19:30:46.565978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.746 [2024-11-26 19:30:46.565988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:12.746 [2024-11-26 19:30:46.577590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.746 [2024-11-26 19:30:46.577607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.746 [2024-11-26 19:30:46.577613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:12.746 [2024-11-26 19:30:46.587640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.746 [2024-11-26 19:30:46.587657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.746 [2024-11-26 19:30:46.587663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:12.746 [2024-11-26 19:30:46.598588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:12.746 [2024-11-26 19:30:46.598606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:12.746 [2024-11-26 19:30:46.598612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.008 [2024-11-26 19:30:46.609889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.008 [2024-11-26 19:30:46.609906] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.008 [2024-11-26 19:30:46.609913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.008 [2024-11-26 19:30:46.621696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.008 [2024-11-26 19:30:46.621713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.008 [2024-11-26 19:30:46.621720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.008 [2024-11-26 19:30:46.633202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.008 [2024-11-26 19:30:46.633219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.008 [2024-11-26 19:30:46.633225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.008 [2024-11-26 19:30:46.644336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.008 [2024-11-26 19:30:46.644354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.008 [2024-11-26 19:30:46.644360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.008 [2024-11-26 19:30:46.655771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x175e5a0) 00:24:13.008 [2024-11-26 19:30:46.655788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.655794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.667011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.667032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.667039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.678181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.678199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.678206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.689679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.689696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.689702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.700968] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.700985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.700991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.712364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.712381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.712387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.724033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.724051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.724057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.734902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.734919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.734926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.744937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.744954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.744960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.756397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.756414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.756423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.767665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.767683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.767689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.777385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.777403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.777410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.787411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.787428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.787435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.795923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.795940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.795946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.802819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.802836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.802842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.009 [2024-11-26 19:30:46.813139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.009 [2024-11-26 19:30:46.813156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.009 [2024-11-26 19:30:46.813162] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.009 [2024-11-26 19:30:46.822055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.009 [2024-11-26 19:30:46.822072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.009 [2024-11-26 19:30:46.822079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.009 [2024-11-26 19:30:46.830917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.009 [2024-11-26 19:30:46.830933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.009 [2024-11-26 19:30:46.830939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.009 [2024-11-26 19:30:46.840569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.009 [2024-11-26 19:30:46.840590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.009 [2024-11-26 19:30:46.840597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.009 [2024-11-26 19:30:46.849575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.010 [2024-11-26 19:30:46.849592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.010 [2024-11-26 19:30:46.849598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.010 [2024-11-26 19:30:46.858779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.010 [2024-11-26 19:30:46.858796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.010 [2024-11-26 19:30:46.858802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.010 [2024-11-26 19:30:46.864756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.010 [2024-11-26 19:30:46.864773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.010 [2024-11-26 19:30:46.864779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.873727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.873744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.873751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.885875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.885893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.885899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.893422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.893440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.893446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.899964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.899982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.899988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.903516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.903533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.903539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.906938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.906955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.906962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.910273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.910290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.910296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.913670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.913687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.913693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.916916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.916933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.916940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.920317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.920334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.920341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.928634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.928652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.928658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.937072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.937090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.937096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.946079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.946096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.946106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.955605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.955623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.955632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.962159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.962175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.962181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.965690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.965707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.965713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.969556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.969573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.969579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.972968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.972985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.972992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.976374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.976391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.976397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.984906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.984924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.984930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.993792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.993809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.993815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:46.998609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:46.998626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:46.998633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:47.007913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:47.007934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:47.007940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:47.014933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:47.014950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:47.014957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.273 [2024-11-26 19:30:47.021942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.273 [2024-11-26 19:30:47.021959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.273 [2024-11-26 19:30:47.021966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.030209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.030226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.030232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.039188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.039205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.039211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.046605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.046621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.046628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.054734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.054751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.054757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.061327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.061344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.061350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.071741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.071758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.071764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.079093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.079116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.079122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.086231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.086248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.086254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.096078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.096096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.096108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.102171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.102190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.102196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.109497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.109515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.109521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.120077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.120095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.120107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.274 [2024-11-26 19:30:47.130426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.274 [2024-11-26 19:30:47.130444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.274 [2024-11-26 19:30:47.130450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.138298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.138315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.138322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.144021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.144038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.144048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.147376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.147394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.147400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.151401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.151418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.151425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.156554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.156571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.156578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.159797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.159815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.159821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.163389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.163406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.163412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.171187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.536 [2024-11-26 19:30:47.171204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.536 [2024-11-26 19:30:47.171210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.536 [2024-11-26 19:30:47.181714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.181731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.181737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.193225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.193242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.193249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.204358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.204377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.204383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.215835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.215852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.215859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.225784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.225801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.225807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.234723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.234741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.234747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.243929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.243946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.243953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.254886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.254903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.254909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.263855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.263872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.263879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.271866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.271882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.271888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.282025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.282042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.282052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.292434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.292451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.292457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.301706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.301723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.301729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.310661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.310678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.310684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.319382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.319399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.319406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.328168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.328185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.328191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.336859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.336876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.336882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.345900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.345916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.345923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.354736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.354753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.354759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.363502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.363522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.363529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.372264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.372282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.372288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.380774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.380790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.380797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.387789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.387806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.387813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:24:13.537 [2024-11-26 19:30:47.397271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.537 [2024-11-26 19:30:47.397287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.537 [2024-11-26 19:30:47.397294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:24:13.799 3612.00 IOPS, 451.50 MiB/s [2024-11-26T18:30:47.664Z] [2024-11-26 19:30:47.407576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0)
00:24:13.799 [2024-11-26 19:30:47.407592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:13.799 [2024-11-26 19:30:47.407599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:13.799 [2024-11-26 19:30:47.417087]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.799 [2024-11-26 19:30:47.417110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.799 [2024-11-26 19:30:47.417116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.799 [2024-11-26 19:30:47.428399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.799 [2024-11-26 19:30:47.428416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.428422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.437158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.437175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.437181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.447259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.447277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.447283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.457876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.457894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.457900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.464707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.464725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.464731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.472067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.472085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.472092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.482363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.482380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.482387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.489597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.489621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.498669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.498687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.498694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.509088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.509111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.509118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.519152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.519170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.519179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.528835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.528853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.528859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.539069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.539088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.539094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.547173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.547192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.547199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.557923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.557941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:13.800 [2024-11-26 19:30:47.557947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.566066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.566083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.566090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.576808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.576826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.576832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.584566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.584583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.584589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.595646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.595664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.595670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.603614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.603636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.603643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.614421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.614439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.614445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.624936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.624954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.624960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.635480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.635497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.635504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.645732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.645750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.645756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:13.800 [2024-11-26 19:30:47.655776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:13.800 [2024-11-26 19:30:47.655794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.800 [2024-11-26 19:30:47.655801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.062 [2024-11-26 19:30:47.666581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.062 [2024-11-26 19:30:47.666600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.062 [2024-11-26 19:30:47.666606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.062 [2024-11-26 19:30:47.677200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 
00:24:14.063 [2024-11-26 19:30:47.677217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.677223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.688052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.688070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.688078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.699046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.699064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.699071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.709866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.709883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.709889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.718765] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.718783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.718790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.728347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.728365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.728371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.736695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.736713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.736719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.746230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.746254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.756378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.756396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.756402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.766941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.766959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.766965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.775122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.775139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.775149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.782980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.782997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.783003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.790586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.790604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.790610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.798428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.798446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.798452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.809211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.809228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.809234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.819390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.819407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.819414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.829591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.829609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.829615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.839365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.839383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.839390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.849728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.849745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.849751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.858162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.858180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.858186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.868780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.868798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.868804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.877365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.877389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.887016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.887034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.887040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.897217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.897235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.897241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.908108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.908126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.908132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.063 [2024-11-26 19:30:47.918862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.063 [2024-11-26 19:30:47.918880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.063 [2024-11-26 19:30:47.918886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.930775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.324 [2024-11-26 19:30:47.930792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-26 19:30:47.930799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.942538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.324 [2024-11-26 19:30:47.942556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-26 19:30:47.942566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.954527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.324 [2024-11-26 19:30:47.954545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-26 19:30:47.954552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.966333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.324 [2024-11-26 19:30:47.966351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-26 19:30:47.966357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.978093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.324 [2024-11-26 19:30:47.978117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.324 [2024-11-26 19:30:47.978123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.324 [2024-11-26 19:30:47.989742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 
00:24:14.324 [2024-11-26 19:30:47.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:47.989767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.000951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.000969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.000975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.012735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.012752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.012758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.024072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.024090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.024097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.035405] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.035422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.035428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.047163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.047183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.047189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.057756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.057774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.057780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.067266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.067282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.067288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.077532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.077550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.077556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.088194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.088211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.088217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.098114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.098132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.098138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.108241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.108259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.108265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.119232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.119250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.119258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.129488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.129506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.129512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.138306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.138324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.138331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.148574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.148591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.148597] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.159145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.159162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.159168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.169272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.169290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.169296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.325 [2024-11-26 19:30:48.179274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.325 [2024-11-26 19:30:48.179292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.325 [2024-11-26 19:30:48.179298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.189726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.189743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.189749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.201076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.201094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.201105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.211517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.211535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.211541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.221939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.221956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.221965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.233612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.233629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.233636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.244898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.244916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.244922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.256701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.256719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.256725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.266713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.266730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.266737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.277017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.277034] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.277040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.288317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.288335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.288341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.299557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.299574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.299581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.310948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.310965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.310972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.321453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.321475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.321481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.332624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.332642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.332648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.342789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.342806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.342812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.351838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.351856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.351862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.362091] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.362113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.362119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.371768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.371785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.371792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.382151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.382169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.382175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.392344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.392361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.392367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:24:14.585 [2024-11-26 19:30:48.402479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x175e5a0) 00:24:14.585 [2024-11-26 19:30:48.402497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.585 [2024-11-26 19:30:48.402503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:14.585 3338.00 IOPS, 417.25 MiB/s 00:24:14.585 Latency(us) 00:24:14.585 [2024-11-26T18:30:48.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.585 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:14.585 nvme0n1 : 2.00 3340.55 417.57 0.00 0.00 4787.44 921.60 14745.60 00:24:14.585 [2024-11-26T18:30:48.450Z] =================================================================================================================== 00:24:14.585 [2024-11-26T18:30:48.450Z] Total : 3340.55 417.57 0.00 0.00 4787.44 921.60 14745.60 00:24:14.585 { 00:24:14.585 "results": [ 00:24:14.585 { 00:24:14.585 "job": "nvme0n1", 00:24:14.585 "core_mask": "0x2", 00:24:14.585 "workload": "randread", 00:24:14.585 "status": "finished", 00:24:14.585 "queue_depth": 16, 00:24:14.585 "io_size": 131072, 00:24:14.585 "runtime": 2.003264, 00:24:14.585 "iops": 3340.5482252963166, 00:24:14.585 "mibps": 417.56852816203957, 00:24:14.585 "io_failed": 0, 00:24:14.585 "io_timeout": 0, 00:24:14.585 "avg_latency_us": 4787.440749153217, 00:24:14.585 "min_latency_us": 921.6, 00:24:14.585 "max_latency_us": 14745.6 00:24:14.585 } 00:24:14.585 ], 00:24:14.585 "core_count": 1 00:24:14.585 } 00:24:14.585 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:14.585 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:24:14.585 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:14.585 | .driver_specific 00:24:14.585 | .nvme_error 00:24:14.585 | .status_code 00:24:14.585 | .command_transient_transport_error' 00:24:14.585 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3889050 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3889050 ']' 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3889050 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3889050 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3889050' 00:24:14.845 killing process with pid 3889050 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3889050 00:24:14.845 Received shutdown signal, test time was 
about 2.000000 seconds 00:24:14.845 00:24:14.845 Latency(us) 00:24:14.845 [2024-11-26T18:30:48.710Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:14.845 [2024-11-26T18:30:48.710Z] =================================================================================================================== 00:24:14.845 [2024-11-26T18:30:48.710Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:14.845 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3889050 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3889725 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3889725 /var/tmp/bperf.sock 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3889725 ']' 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:15.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:15.103 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:15.104 [2024-11-26 19:30:48.769246] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:15.104 [2024-11-26 19:30:48.769299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889725 ] 00:24:15.104 [2024-11-26 19:30:48.834064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.104 [2024-11-26 19:30:48.862150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:15.104 19:30:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t 
disable 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.362 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:15.621 nvme0n1 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:15.621 19:30:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:15.882 Running I/O for 2 seconds... 
00:24:15.882 [2024-11-26 19:30:49.524250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee23b8
00:24:15.882 [2024-11-26 19:30:49.525141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.525168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.532809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec840
00:24:15.882 [2024-11-26 19:30:49.533716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.533738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.541500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed920
00:24:15.882 [2024-11-26 19:30:49.542370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.542388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.550086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef20d8
00:24:15.882 [2024-11-26 19:30:49.550994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.551013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.558615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0ff8
00:24:15.882 [2024-11-26 19:30:49.559538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.559555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.567115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeff18
00:24:15.882 [2024-11-26 19:30:49.568015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.568031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.575607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeee38
00:24:15.882 [2024-11-26 19:30:49.576504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.576523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.584096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eedd58
00:24:15.882 [2024-11-26 19:30:49.584998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:10925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.585015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.592575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1f80
00:24:15.882 [2024-11-26 19:30:49.593472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.593490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.602113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060
00:24:15.882 [2024-11-26 19:30:49.603471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:21261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.603488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.609694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec408
00:24:15.882 [2024-11-26 19:30:49.610610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.610626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.618078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378
00:24:15.882 [2024-11-26 19:30:49.618987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.619004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.626548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef7970
00:24:15.882 [2024-11-26 19:30:49.627469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.627488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.635010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0
00:24:15.882 [2024-11-26 19:30:49.635913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.635930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.643462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7c50
00:24:15.882 [2024-11-26 19:30:49.644343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.644360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.651914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeaef0
00:24:15.882 [2024-11-26 19:30:49.652814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.882 [2024-11-26 19:30:49.652833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.882 [2024-11-26 19:30:49.660388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220
00:24:15.882 [2024-11-26 19:30:49.661304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.661326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.668862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef3e60
00:24:15.883 [2024-11-26 19:30:49.669767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.669785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.677347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8e88
00:24:15.883 [2024-11-26 19:30:49.678289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.678306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.685796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5ec8
00:24:15.883 [2024-11-26 19:30:49.686696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.686715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.694270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9168
00:24:15.883 [2024-11-26 19:30:49.695129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.695147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.702731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec408
00:24:15.883 [2024-11-26 19:30:49.703631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.703649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.711196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378
00:24:15.883 [2024-11-26 19:30:49.712122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.712138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.719673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef7970
00:24:15.883 [2024-11-26 19:30:49.720575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.728137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0
00:24:15.883 [2024-11-26 19:30:49.729002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.729019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.736587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7c50
00:24:15.883 [2024-11-26 19:30:49.737491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:22502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:15.883 [2024-11-26 19:30:49.737509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:15.883 [2024-11-26 19:30:49.745173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeaef0
00:24:16.145 [2024-11-26 19:30:49.746094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.746115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.753633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220
00:24:16.145 [2024-11-26 19:30:49.754546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.754562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.762120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef3e60
00:24:16.145 [2024-11-26 19:30:49.763015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:10467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.763033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.770584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8e88
00:24:16.145 [2024-11-26 19:30:49.771458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.771477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.779031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5ec8
00:24:16.145 [2024-11-26 19:30:49.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.779952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.787483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9168
00:24:16.145 [2024-11-26 19:30:49.788381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.788398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.795930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec408
00:24:16.145 [2024-11-26 19:30:49.796840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.796856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.804402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378
00:24:16.145 [2024-11-26 19:30:49.805263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.805281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.812869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef7970
00:24:16.145 [2024-11-26 19:30:49.813753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.813772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.821325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0
00:24:16.145 [2024-11-26 19:30:49.822242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.822259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.829781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7c50
00:24:16.145 [2024-11-26 19:30:49.830687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.830703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.838240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeaef0
00:24:16.145 [2024-11-26 19:30:49.839116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.839134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.846703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220
00:24:16.145 [2024-11-26 19:30:49.847586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.847602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.855334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef3e60
00:24:16.145 [2024-11-26 19:30:49.856249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.856266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.863802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8e88
00:24:16.145 [2024-11-26 19:30:49.864732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.864748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.872257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5ec8
00:24:16.145 [2024-11-26 19:30:49.873139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.873155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.880696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9168
00:24:16.145 [2024-11-26 19:30:49.881564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.881584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.889160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec408
00:24:16.145 [2024-11-26 19:30:49.890084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.890103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.897619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378
00:24:16.145 [2024-11-26 19:30:49.898486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.145 [2024-11-26 19:30:49.898504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.145 [2024-11-26 19:30:49.906083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef7970
00:24:16.145 [2024-11-26 19:30:49.906981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.907000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.914540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0
00:24:16.146 [2024-11-26 19:30:49.915436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.915455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.922989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7c50
00:24:16.146 [2024-11-26 19:30:49.923887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.923905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.931450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeaef0
00:24:16.146 [2024-11-26 19:30:49.932358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.932374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.939912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220
00:24:16.146 [2024-11-26 19:30:49.940815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.940833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.948384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef3e60
00:24:16.146 [2024-11-26 19:30:49.949292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.949311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.956849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8e88
00:24:16.146 [2024-11-26 19:30:49.957764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:9219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.957781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.965324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5ec8
00:24:16.146 [2024-11-26 19:30:49.966252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.966269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.973778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9168
00:24:16.146 [2024-11-26 19:30:49.974678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.974697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.982244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec408
00:24:16.146 [2024-11-26 19:30:49.983137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.983154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.990704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378
00:24:16.146 [2024-11-26 19:30:49.991568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:49.991587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:49.999185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef7970
00:24:16.146 [2024-11-26 19:30:50.000066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.146 [2024-11-26 19:30:50.000081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.146 [2024-11-26 19:30:50.008186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0
00:24:16.407 [2024-11-26 19:30:50.009089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.009112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.016639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7c50
00:24:16.407 [2024-11-26 19:30:50.017546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.017565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.025104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeaef0
00:24:16.407 [2024-11-26 19:30:50.025997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.026016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.033577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220
00:24:16.407 [2024-11-26 19:30:50.034448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.034465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.041521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efb048
00:24:16.407 [2024-11-26 19:30:50.042440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.042455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.051197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5658
00:24:16.407 [2024-11-26 19:30:50.052335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.058320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7818
00:24:16.407 [2024-11-26 19:30:50.059006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.059022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.066815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efc560
00:24:16.407 [2024-11-26 19:30:50.067527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.067544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.075281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee0ea0
00:24:16.407 [2024-11-26 19:30:50.075965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.075984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.083755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6020
00:24:16.407 [2024-11-26 19:30:50.084449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.084466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.092235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8
00:24:16.407 [2024-11-26 19:30:50.092930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.092946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.100693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ede038
00:24:16.407 [2024-11-26 19:30:50.101380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.101399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.109144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efa7d8
00:24:16.407 [2024-11-26 19:30:50.109867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.109883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.117607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efe2e8
00:24:16.407 [2024-11-26 19:30:50.118302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.407 [2024-11-26 19:30:50.118320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.407 [2024-11-26 19:30:50.126068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eecc78
00:24:16.407 [2024-11-26 19:30:50.126764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.126780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.134543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4b08
00:24:16.408 [2024-11-26 19:30:50.135248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:6529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.135267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.143014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef81e0
00:24:16.408 [2024-11-26 19:30:50.143710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.143727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.151468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7818
00:24:16.408 [2024-11-26 19:30:50.152185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.152201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.159941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efc560
00:24:16.408 [2024-11-26 19:30:50.160637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.160653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.168411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee0ea0
00:24:16.408 [2024-11-26 19:30:50.169124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.169140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.176882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6020
00:24:16.408 [2024-11-26 19:30:50.177539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:16.408 [2024-11-26 19:30:50.177558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:16.408 [2024-11-26 19:30:50.185364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) 
with pdu=0x200016ef6cc8 00:24:16.408 [2024-11-26 19:30:50.186084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.186103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.193822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ede038 00:24:16.408 [2024-11-26 19:30:50.194477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.194494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.202271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efa7d8 00:24:16.408 [2024-11-26 19:30:50.202959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.202976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.210726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efe2e8 00:24:16.408 [2024-11-26 19:30:50.211380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.211396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.219198] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eecc78 00:24:16.408 [2024-11-26 19:30:50.219896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.219914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.227673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4b08 00:24:16.408 [2024-11-26 19:30:50.228360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.228380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.236136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef81e0 00:24:16.408 [2024-11-26 19:30:50.236705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.236722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.244595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7818 00:24:16.408 [2024-11-26 19:30:50.245280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.245298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.253041] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efc560 00:24:16.408 [2024-11-26 19:30:50.253743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.253759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.261525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee0ea0 00:24:16.408 [2024-11-26 19:30:50.262231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.262249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.408 [2024-11-26 19:30:50.269998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6020 00:24:16.408 [2024-11-26 19:30:50.270676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.408 [2024-11-26 19:30:50.270695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.278471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8 00:24:16.670 [2024-11-26 19:30:50.279137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.279156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 
00:24:16.670 [2024-11-26 19:30:50.286918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ede038 00:24:16.670 [2024-11-26 19:30:50.287619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.287636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.295372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efa7d8 00:24:16.670 [2024-11-26 19:30:50.296085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:12015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.296106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.303826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efe2e8 00:24:16.670 [2024-11-26 19:30:50.304524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:20976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.304542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.312298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eecc78 00:24:16.670 [2024-11-26 19:30:50.312985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.313004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.320767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4b08 00:24:16.670 [2024-11-26 19:30:50.321421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.321439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.329229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef81e0 00:24:16.670 [2024-11-26 19:30:50.329912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.329933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.337691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee7818 00:24:16.670 [2024-11-26 19:30:50.338394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.670 [2024-11-26 19:30:50.338411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:16.670 [2024-11-26 19:30:50.346445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee8088 00:24:16.670 [2024-11-26 19:30:50.347242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.347257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.354923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eec840 00:24:16.671 [2024-11-26 19:30:50.355724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.355740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.363664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef1ca0 00:24:16.671 [2024-11-26 19:30:50.364478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.364495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.373195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef2d80 00:24:16.671 [2024-11-26 19:30:50.374462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.374478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.380831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5a90 00:24:16.671 [2024-11-26 19:30:50.381652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.381668] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.389227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee99d8 00:24:16.671 [2024-11-26 19:30:50.390032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.390048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.397685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efac10 00:24:16.671 [2024-11-26 19:30:50.398505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.398525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.406161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060 00:24:16.671 [2024-11-26 19:30:50.406971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.406987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.414641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:16.671 [2024-11-26 19:30:50.415466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 
[2024-11-26 19:30:50.415483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.423091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220 00:24:16.671 [2024-11-26 19:30:50.423909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.423925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.431551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf988 00:24:16.671 [2024-11-26 19:30:50.432366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.432383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.441033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef9b30 00:24:16.671 [2024-11-26 19:30:50.442270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.442286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.449838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef1430 00:24:16.671 [2024-11-26 19:30:50.451114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4968 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.451129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.456964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4f40 00:24:16.671 [2024-11-26 19:30:50.457786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.457802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.465437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee6fa8 00:24:16.671 [2024-11-26 19:30:50.466280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.466297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.473891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0ff8 00:24:16.671 [2024-11-26 19:30:50.474708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.474724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.482350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eee190 00:24:16.671 [2024-11-26 19:30:50.483137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.483153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.490817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0 00:24:16.671 [2024-11-26 19:30:50.491631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.491647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.499291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eebb98 00:24:16.671 [2024-11-26 19:30:50.500117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.500133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.507757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:16.671 [2024-11-26 19:30:50.508561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.508577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.516195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9e10 00:24:16.671 29995.00 IOPS, 117.17 MiB/s [2024-11-26T18:30:50.536Z] 
[2024-11-26 19:30:50.517163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.517178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.524641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1710 00:24:16.671 [2024-11-26 19:30:50.525450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.671 [2024-11-26 19:30:50.525470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.671 [2024-11-26 19:30:50.533112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee73e0 00:24:16.933 [2024-11-26 19:30:50.533870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.541585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf118 00:24:16.933 [2024-11-26 19:30:50.542379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.542396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.550057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6a9d0) with pdu=0x200016efb8b8 00:24:16.933 [2024-11-26 19:30:50.550853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.550869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.558525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efd640 00:24:16.933 [2024-11-26 19:30:50.559333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.559350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.566990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eea680 00:24:16.933 [2024-11-26 19:30:50.567757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.567774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.575451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef46d0 00:24:16.933 [2024-11-26 19:30:50.576271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.576287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.583926] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee12d8 00:24:16.933 [2024-11-26 19:30:50.584732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.584749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.592404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edece0 00:24:16.933 [2024-11-26 19:30:50.593231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.593248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.600909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eebb98 00:24:16.933 [2024-11-26 19:30:50.601710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.601727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.609531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:16.933 [2024-11-26 19:30:50.610299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.610314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 
00:24:16.933 [2024-11-26 19:30:50.618000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9e10 00:24:16.933 [2024-11-26 19:30:50.618812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.618831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.626474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1710 00:24:16.933 [2024-11-26 19:30:50.627259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.627275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.634949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee73e0 00:24:16.933 [2024-11-26 19:30:50.635708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.635727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.644756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8e88 00:24:16.933 [2024-11-26 19:30:50.645953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.645968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.651959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060 00:24:16.933 [2024-11-26 19:30:50.652751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.652767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.660374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efac10 00:24:16.933 [2024-11-26 19:30:50.661134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.661150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.668830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6890 00:24:16.933 [2024-11-26 19:30:50.669591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.933 [2024-11-26 19:30:50.669608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.933 [2024-11-26 19:30:50.677314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf988 00:24:16.934 [2024-11-26 19:30:50.678121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.678141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.685798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220 00:24:16.934 [2024-11-26 19:30:50.686589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.686610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.694276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:16.934 [2024-11-26 19:30:50.695033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.695049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.702771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:16.934 [2024-11-26 19:30:50.703569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.703589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.711239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efa7d8 00:24:16.934 [2024-11-26 19:30:50.712030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.712051] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.719703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8 00:24:16.934 [2024-11-26 19:30:50.720495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.720515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.728188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf550 00:24:16.934 [2024-11-26 19:30:50.728975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.728991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.736674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee4de8 00:24:16.934 [2024-11-26 19:30:50.737463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.737480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.745154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef57b0 00:24:16.934 [2024-11-26 19:30:50.745947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 
[2024-11-26 19:30:50.745968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.753612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060 00:24:16.934 [2024-11-26 19:30:50.754394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.754412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.762079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efac10 00:24:16.934 [2024-11-26 19:30:50.762839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.762856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.770564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6890 00:24:16.934 [2024-11-26 19:30:50.771322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.771338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.779042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf988 00:24:16.934 [2024-11-26 19:30:50.779833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25378 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.779849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.787515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220 00:24:16.934 [2024-11-26 19:30:50.788292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:16.934 [2024-11-26 19:30:50.788310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:16.934 [2024-11-26 19:30:50.795969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:17.196 [2024-11-26 19:30:50.796786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.796806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.804435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:17.196 [2024-11-26 19:30:50.805251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.805271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.812903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efa7d8 00:24:17.196 [2024-11-26 19:30:50.813693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:15393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.813711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.821404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8 00:24:17.196 [2024-11-26 19:30:50.822201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.822217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.829891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf550 00:24:17.196 [2024-11-26 19:30:50.830655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.830672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.838363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee4de8 00:24:17.196 [2024-11-26 19:30:50.839138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.839154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.846813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef57b0 00:24:17.196 [2024-11-26 19:30:50.847616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.847635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.855429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060 00:24:17.196 [2024-11-26 19:30:50.856232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.856252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.863922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efac10 00:24:17.196 [2024-11-26 19:30:50.864714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.864730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.872400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6890 00:24:17.196 [2024-11-26 19:30:50.873155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.873172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.880862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf988 00:24:17.196 
[2024-11-26 19:30:50.881655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:23426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.881672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.889323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220 00:24:17.196 [2024-11-26 19:30:50.890145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.890163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.897785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:17.196 [2024-11-26 19:30:50.898584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.898604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.906272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:17.196 [2024-11-26 19:30:50.907087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.907106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.914745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) 
with pdu=0x200016efa7d8 00:24:17.196 [2024-11-26 19:30:50.915551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.915573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.923236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8 00:24:17.196 [2024-11-26 19:30:50.923982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.923998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.931694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf550 00:24:17.196 [2024-11-26 19:30:50.932490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.932510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.940151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee4de8 00:24:17.196 [2024-11-26 19:30:50.940924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.940941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.948614] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef57b0 00:24:17.196 [2024-11-26 19:30:50.949422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.949442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.957089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee3060 00:24:17.196 [2024-11-26 19:30:50.957887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.957907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.965029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0350 00:24:17.196 [2024-11-26 19:30:50.965815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.196 [2024-11-26 19:30:50.965831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:17.196 [2024-11-26 19:30:50.974476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eea680 00:24:17.197 [2024-11-26 19:30:50.975380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:50.975396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:50.982930] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeea00 00:24:17.197 [2024-11-26 19:30:50.983848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:50.983865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:50.991440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed920 00:24:17.197 [2024-11-26 19:30:50.992344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:50.992361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:50.999928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed920 00:24:17.197 [2024-11-26 19:30:51.000841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.000858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.008696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:17.197 [2024-11-26 19:30:51.009587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.009604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 
00:24:17.197 [2024-11-26 19:30:51.017177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:17.197 [2024-11-26 19:30:51.018080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.018096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.025646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:17.197 [2024-11-26 19:30:51.026528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.026545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.034117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9e10 00:24:17.197 [2024-11-26 19:30:51.034996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.042597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4f40 00:24:17.197 [2024-11-26 19:30:51.043482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.043500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.051084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee6300 00:24:17.197 [2024-11-26 19:30:51.051951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.197 [2024-11-26 19:30:51.051968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.197 [2024-11-26 19:30:51.059585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeb328 00:24:17.457 [2024-11-26 19:30:51.060488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:12806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.457 [2024-11-26 19:30:51.060507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.457 [2024-11-26 19:30:51.068051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1710 00:24:17.457 [2024-11-26 19:30:51.068935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.457 [2024-11-26 19:30:51.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.457 [2024-11-26 19:30:51.076514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee38d0 00:24:17.457 [2024-11-26 19:30:51.077389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.077405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.084972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeea00 00:24:17.458 [2024-11-26 19:30:51.085848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.085866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.093447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed920 00:24:17.458 [2024-11-26 19:30:51.094329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.094348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.101911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5378 00:24:17.458 [2024-11-26 19:30:51.102795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.102812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.110385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:17.458 [2024-11-26 19:30:51.111279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:23056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.111297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.118834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:17.458 [2024-11-26 19:30:51.119710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.119728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.127285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee9e10 00:24:17.458 [2024-11-26 19:30:51.128189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.128205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.135740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4f40 00:24:17.458 [2024-11-26 19:30:51.136619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.136640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.144218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee6300 00:24:17.458 [2024-11-26 19:30:51.145111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 
[2024-11-26 19:30:51.145129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.152694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeb328 00:24:17.458 [2024-11-26 19:30:51.153551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.153570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.162294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1710 00:24:17.458 [2024-11-26 19:30:51.163701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.163718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.170190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016edf118 00:24:17.458 [2024-11-26 19:30:51.171148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.171165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.178056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eff3c8 00:24:17.458 [2024-11-26 19:30:51.179015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18109 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.179032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.186799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0788 00:24:17.458 [2024-11-26 19:30:51.187718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.187737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.195427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8618 00:24:17.458 [2024-11-26 19:30:51.196352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.196371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.204170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee4de8 00:24:17.458 [2024-11-26 19:30:51.205104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.205119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.212639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:17.458 [2024-11-26 19:30:51.213595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:84 nsid:1 lba:17311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.213611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.220448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee0ea0 00:24:17.458 [2024-11-26 19:30:51.221279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.221294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.229730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee84c0 00:24:17.458 [2024-11-26 19:30:51.230638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.230655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.238472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eee5c8 00:24:17.458 [2024-11-26 19:30:51.239393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.239409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.246531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef8618 00:24:17.458 [2024-11-26 19:30:51.247406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.247422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.254988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efcdd0 00:24:17.458 [2024-11-26 19:30:51.255892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.255911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.262943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efbcf0 00:24:17.458 [2024-11-26 19:30:51.263746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.263764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.272525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee88f8 00:24:17.458 [2024-11-26 19:30:51.273578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.273594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.281017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4b08 00:24:17.458 
[2024-11-26 19:30:51.282081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.282099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.289501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0788 00:24:17.458 [2024-11-26 19:30:51.290511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.290529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.458 [2024-11-26 19:30:51.297973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed0b0 00:24:17.458 [2024-11-26 19:30:51.299023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.458 [2024-11-26 19:30:51.299039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.459 [2024-11-26 19:30:51.306444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ede8a8 00:24:17.459 [2024-11-26 19:30:51.307456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.459 [2024-11-26 19:30:51.307473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.459 [2024-11-26 19:30:51.314907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6a9d0) with pdu=0x200016eeee38 00:24:17.459 [2024-11-26 19:30:51.315984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.459 [2024-11-26 19:30:51.316000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.323411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efb8b8 00:24:17.719 [2024-11-26 19:30:51.324456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.324474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.331923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef92c0 00:24:17.719 [2024-11-26 19:30:51.332967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.332985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.340446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efe2e8 00:24:17.719 [2024-11-26 19:30:51.341500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.341517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.348915] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee88f8 00:24:17.719 [2024-11-26 19:30:51.349995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.350011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.357407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef4b08 00:24:17.719 [2024-11-26 19:30:51.358457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.358476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.365876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef0788 00:24:17.719 [2024-11-26 19:30:51.366931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.366948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.374437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eed0b0 00:24:17.719 [2024-11-26 19:30:51.375490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.375507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 
00:24:17.719 [2024-11-26 19:30:51.382915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ede8a8 00:24:17.719 [2024-11-26 19:30:51.383969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.383985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.391404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eeee38 00:24:17.719 [2024-11-26 19:30:51.392463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.392482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.399878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efb8b8 00:24:17.719 [2024-11-26 19:30:51.400933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.400948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.408347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef92c0 00:24:17.719 [2024-11-26 19:30:51.409399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.409415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.416310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efcdd0 00:24:17.719 [2024-11-26 19:30:51.417350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.417367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.425117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eee190 00:24:17.719 [2024-11-26 19:30:51.425860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.425876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.433762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef5be8 00:24:17.719 [2024-11-26 19:30:51.434691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.719 [2024-11-26 19:30:51.434708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:17.719 [2024-11-26 19:30:51.442483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eedd58 00:24:17.720 [2024-11-26 19:30:51.443289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.443306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.451098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016efeb58 00:24:17.720 [2024-11-26 19:30:51.452157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.452174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.459567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee2c28 00:24:17.720 [2024-11-26 19:30:51.460618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.460633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.468044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef1ca0 00:24:17.720 [2024-11-26 19:30:51.469116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.469134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.476519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5220 00:24:17.720 [2024-11-26 19:30:51.477566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.477585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.484973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee1b48 00:24:17.720 [2024-11-26 19:30:51.486020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.486038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.493424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ef6cc8 00:24:17.720 [2024-11-26 19:30:51.494470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.494488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.501880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eea248 00:24:17.720 [2024-11-26 19:30:51.502917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.502937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.510342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016ee5ec8 00:24:17.720 [2024-11-26 19:30:51.511361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 
[2024-11-26 19:30:51.511379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 [2024-11-26 19:30:51.518791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6a9d0) with pdu=0x200016eef6a8 00:24:17.720 30042.00 IOPS, 117.35 MiB/s [2024-11-26T18:30:51.585Z] [2024-11-26 19:30:51.519821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:17.720 [2024-11-26 19:30:51.519838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:17.720 00:24:17.720 Latency(us) 00:24:17.720 [2024-11-26T18:30:51.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:17.720 nvme0n1 : 2.00 30063.18 117.43 0.00 0.00 4253.06 1727.15 10758.83 00:24:17.720 [2024-11-26T18:30:51.585Z] =================================================================================================================== 00:24:17.720 [2024-11-26T18:30:51.585Z] Total : 30063.18 117.43 0.00 0.00 4253.06 1727.15 10758.83 00:24:17.720 { 00:24:17.720 "results": [ 00:24:17.720 { 00:24:17.720 "job": "nvme0n1", 00:24:17.720 "core_mask": "0x2", 00:24:17.720 "workload": "randwrite", 00:24:17.720 "status": "finished", 00:24:17.720 "queue_depth": 128, 00:24:17.720 "io_size": 4096, 00:24:17.720 "runtime": 2.002849, 00:24:17.720 "iops": 30063.17500720224, 00:24:17.720 "mibps": 117.43427737188375, 00:24:17.720 "io_failed": 0, 00:24:17.720 "io_timeout": 0, 00:24:17.720 "avg_latency_us": 4253.057777187271, 00:24:17.720 "min_latency_us": 1727.1466666666668, 00:24:17.720 "max_latency_us": 10758.826666666666 00:24:17.720 } 00:24:17.720 ], 00:24:17.720 "core_count": 1 00:24:17.720 } 00:24:17.720 19:30:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:17.720 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:17.720 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:17.720 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:17.720 | .driver_specific 00:24:17.720 | .nvme_error 00:24:17.720 | .status_code 00:24:17.720 | .command_transient_transport_error' 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 236 > 0 )) 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3889725 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3889725 ']' 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3889725 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3889725 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3889725' 
00:24:17.980 killing process with pid 3889725 00:24:17.980 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3889725 00:24:17.980 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.980 00:24:17.980 Latency(us) 00:24:17.980 [2024-11-26T18:30:51.845Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.980 [2024-11-26T18:30:51.846Z] =================================================================================================================== 00:24:17.981 [2024-11-26T18:30:51.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.981 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3889725 00:24:17.981 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3890403 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3890403 /var/tmp/bperf.sock 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3890403 ']' 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.241 19:30:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:18.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.241 19:30:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:18.241 [2024-11-26 19:30:51.878788] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:18.241 [2024-11-26 19:30:51.878843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3890403 ] 00:24:18.241 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:18.241 Zero copy mechanism will not be used. 
00:24:18.241 [2024-11-26 19:30:51.942670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.241 [2024-11-26 19:30:51.971954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.241 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.241 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:18.241 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.241 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.499 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.758 nvme0n1 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:18.758 19:30:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:18.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:18.758 Zero copy mechanism will not be used. 00:24:18.758 Running I/O for 2 seconds... 00:24:19.019 [2024-11-26 19:30:52.629161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.629235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.629261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.633120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.633172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.633191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 
19:30:52.636708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.636760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.636776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.639700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.639754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.639770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.642739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.642786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.642802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.646476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.646693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.646714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.656830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.657106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.657122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.665788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.665990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.666006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.670601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.670866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.670883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.680554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.019 [2024-11-26 19:30:52.680772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.019 [2024-11-26 19:30:52.680788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.019 [2024-11-26 19:30:52.690087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.020 [2024-11-26 19:30:52.690388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.020 [2024-11-26 19:30:52.690403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[repeated log entries elided: the same three-message cycle — tcp.c:2233:data_crc32_calc_done Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8, nvme_qpair.c:243 WRITE command notice (sqid:1, nsid:1, len:32, varying lba and cid), nvme_qpair.c:474 completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) — recurs for each subsequent WRITE on qid:1 from 19:30:52.699929 through 19:30:53.341778]
00:24:19.545 [2024-11-26 19:30:53.350093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.350143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.350158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.358271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.358457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.358472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.363870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.364094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.364114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.372056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.372255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.372270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.378833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.379057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.379075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.387603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.387784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.395341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.395380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.395396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.545 [2024-11-26 19:30:53.404008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.545 [2024-11-26 19:30:53.404215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.545 [2024-11-26 19:30:53.404231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.410999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.411057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.411072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.417399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.417452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.417467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.420098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.420180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.420196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.426537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.426735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.426750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.436450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.436646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.436662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.444803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.445023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.445038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.452484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.452717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.452732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.461301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.461474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.806 [2024-11-26 19:30:53.461489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.470322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.470451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.470466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.478162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.478406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.478421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.486489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.486667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.486682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.495566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.495843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.495858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.503997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.504222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.504237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.512668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.513075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.513091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.521847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.522057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.522072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.529557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.529778] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.529794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.538147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.538270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.538285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.806 [2024-11-26 19:30:53.547649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.806 [2024-11-26 19:30:53.547836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.806 [2024-11-26 19:30:53.547852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.557683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.557906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.557923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.567418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.567609] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.567624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.577120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.577370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.577385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.587114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.587350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.587367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.595440] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.595592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.595610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.604062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with 
pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.604272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.604287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.611847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.612064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.619966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.620185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.620200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.807 3759.00 IOPS, 469.88 MiB/s [2024-11-26T18:30:53.672Z] [2024-11-26 19:30:53.628196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.628406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.628421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 
19:30:53.636454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.636678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.636694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.644741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.644965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.644981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.653068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.653293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.653308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.659156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.659344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.659359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:24:19.807 [2024-11-26 19:30:53.668744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:19.807 [2024-11-26 19:30:53.668925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.807 [2024-11-26 19:30:53.668940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.678329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.678554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.678570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.688426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.688652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.688668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.697924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.698129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.698145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.707836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.708088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.708109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.717648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.717822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.717837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.727056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.727298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.727314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.737068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.737323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.737339] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.746588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.746800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.746815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.756149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.756349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.756365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.765503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.765638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.765654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.775116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.775253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 
[2024-11-26 19:30:53.775268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.784291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.784498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.784513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.793811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.794026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.794042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.802992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.803184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.803199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.811214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.811483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.811500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.818229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.818272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.069 [2024-11-26 19:30:53.818287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.069 [2024-11-26 19:30:53.824527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.069 [2024-11-26 19:30:53.824683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.824701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.832136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.832176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.832191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.839424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.839611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.839627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.848113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.848303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.848318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.856336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.856568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.856584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.864748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.864881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.864896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.871851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.871890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.871906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.879551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.879739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.879754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.888589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.888818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.888832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.898026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.898222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.898238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.906385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 
[2024-11-26 19:30:53.906440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.906456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.913029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.913157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.922203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.922371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.922385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.070 [2024-11-26 19:30:53.931099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.070 [2024-11-26 19:30:53.931343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.070 [2024-11-26 19:30:53.931358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.940668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.940860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.940875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.950602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.950797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.950812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.960656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.960908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.960924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.970184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.970426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.970442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.975261] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.975509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.975524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.984537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.984744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.984759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:53.993603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:53.993786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:53.993801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.332 [2024-11-26 19:30:54.001831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.332 [2024-11-26 19:30:54.002124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.332 [2024-11-26 19:30:54.002139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:24:20.333 [2024-11-26 19:30:54.010490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.010666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.010681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.017672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.017826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.017842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.022033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.022077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.022092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.025893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.025992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.026007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.034954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.035148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.035166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.043933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.044164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.044179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.048320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.048359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.048374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.053055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.053118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.053133] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.055525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.055577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.055592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.058456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.058498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.058513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.061079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.061134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.061149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.063793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.063927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.063942] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.068391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.068431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.068446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.076213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.076414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.076429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.084645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.084685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.084701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.089598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.089639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:20.333 [2024-11-26 19:30:54.089655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.097997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.098049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.098064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.105549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.105747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.105763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.111531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.111571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.111587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.117118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.117170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.117186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.124968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.125191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.125207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.131280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.131339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.131354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.136914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.136971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.136986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.139848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.139943] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.139958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.144327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.144532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.144547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.154019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.154246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.154262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.163754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.333 [2024-11-26 19:30:54.163961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.333 [2024-11-26 19:30:54.163978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.333 [2024-11-26 19:30:54.173533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.334 [2024-11-26 19:30:54.173586] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.334 [2024-11-26 19:30:54.173601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.334 [2024-11-26 19:30:54.178400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.334 [2024-11-26 19:30:54.178611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.334 [2024-11-26 19:30:54.178627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.334 [2024-11-26 19:30:54.182324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.334 [2024-11-26 19:30:54.182376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.334 [2024-11-26 19:30:54.182391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.334 [2024-11-26 19:30:54.186818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.334 [2024-11-26 19:30:54.186861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.334 [2024-11-26 19:30:54.186882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.334 [2024-11-26 19:30:54.189820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 
00:24:20.334 [2024-11-26 19:30:54.189884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.334 [2024-11-26 19:30:54.189899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.334 [2024-11-26 19:30:54.194947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.596 [2024-11-26 19:30:54.195122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.596 [2024-11-26 19:30:54.195138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.596 [2024-11-26 19:30:54.204641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.596 [2024-11-26 19:30:54.204847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.596 [2024-11-26 19:30:54.204862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.596 [2024-11-26 19:30:54.214905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.596 [2024-11-26 19:30:54.215150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.596 [2024-11-26 19:30:54.215166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.596 [2024-11-26 19:30:54.224702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.596 [2024-11-26 19:30:54.224909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.596 [2024-11-26 19:30:54.224924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.596 [2024-11-26 19:30:54.235418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.596 [2024-11-26 19:30:54.235679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.235695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.244874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.245138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.245154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.252315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.252355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.252371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.260081] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.260279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.260294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.266091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.266137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.266153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.271050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.271223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.271238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.277777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.277826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.277840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:24:20.597 [2024-11-26 19:30:54.282756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.282960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.282975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.287660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.287698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.287713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.292512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.292552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.292568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.297614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.297651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.297666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.305241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.305493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.305508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.312043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.312085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.312105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.316801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.316842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.316857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.321454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.321495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.321510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.327428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.327640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.327655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.332587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.332636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.332651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.335727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.335810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.335825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.339074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.339128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.339143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.347027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.347067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.347082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.350108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.350155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.350173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.352534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.352575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.352590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.354969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.355011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 
[2024-11-26 19:30:54.355026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.357399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.357445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.357460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.359806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.359856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.359871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.597 [2024-11-26 19:30:54.362259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.597 [2024-11-26 19:30:54.362297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.597 [2024-11-26 19:30:54.362311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.364691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.364730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.364745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.367097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.367154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.367169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.369790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.369878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.369892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.373740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.373934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.373949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.383789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.384004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.384021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.392516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.392716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.392731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.402415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.402651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.402667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.412151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.412351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.412366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.422042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.422207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.422222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.431892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.432121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.432137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.441976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.442186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.442202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.598 [2024-11-26 19:30:54.452490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.598 [2024-11-26 19:30:54.452686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.598 [2024-11-26 19:30:54.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.463767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 
[2024-11-26 19:30:54.463972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.463988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.473014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.473053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.473068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.481379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.481595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.481611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.490927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.491123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.491138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.500577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.500772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.500787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.506209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.506259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.506274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.508656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.508704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.508719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.511071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.511118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.511133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.513796] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.513861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.513879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.517489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.517697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.517713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.526721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.526954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.526973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.535584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.535787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.535802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:24:20.859 [2024-11-26 19:30:54.545450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.545567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.545582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.555014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.555121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.555136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.563882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.564071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.564086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.573627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.573809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.573825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.583669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.583864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.583879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.594107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.594326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.859 [2024-11-26 19:30:54.594345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.859 [2024-11-26 19:30:54.603525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.859 [2024-11-26 19:30:54.603577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.603592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.860 [2024-11-26 19:30:54.610180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.860 [2024-11-26 19:30:54.610382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.610397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.860 [2024-11-26 19:30:54.615310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.860 [2024-11-26 19:30:54.615348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.615363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:20.860 [2024-11-26 19:30:54.618447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.860 [2024-11-26 19:30:54.618486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.618501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:20.860 [2024-11-26 19:30:54.620890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.860 [2024-11-26 19:30:54.620938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.620953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:20.860 [2024-11-26 19:30:54.623339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xa6af00) with pdu=0x200016eff3c8 00:24:20.860 [2024-11-26 19:30:54.623380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.860 [2024-11-26 19:30:54.623395] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:20.860 4083.50 IOPS, 510.44 MiB/s 00:24:20.860 Latency(us) 00:24:20.860 [2024-11-26T18:30:54.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.860 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:20.860 nvme0n1 : 2.00 4087.04 510.88 0.00 0.00 3911.18 1167.36 11578.03 00:24:20.860 [2024-11-26T18:30:54.725Z] =================================================================================================================== 00:24:20.860 [2024-11-26T18:30:54.725Z] Total : 4087.04 510.88 0.00 0.00 3911.18 1167.36 11578.03 00:24:20.860 { 00:24:20.860 "results": [ 00:24:20.860 { 00:24:20.860 "job": "nvme0n1", 00:24:20.860 "core_mask": "0x2", 00:24:20.860 "workload": "randwrite", 00:24:20.860 "status": "finished", 00:24:20.860 "queue_depth": 16, 00:24:20.860 "io_size": 131072, 00:24:20.860 "runtime": 2.002184, 00:24:20.860 "iops": 4087.036955644436, 00:24:20.860 "mibps": 510.8796194555545, 00:24:20.860 "io_failed": 0, 00:24:20.860 "io_timeout": 0, 00:24:20.860 "avg_latency_us": 3911.1811185791685, 00:24:20.860 "min_latency_us": 1167.36, 00:24:20.860 "max_latency_us": 11578.026666666667 00:24:20.860 } 00:24:20.860 ], 00:24:20.860 "core_count": 1 00:24:20.860 } 00:24:20.860 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:20.860 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:20.860 | .driver_specific 00:24:20.860 | .nvme_error 00:24:20.860 | .status_code 00:24:20.860 | .command_transient_transport_error' 00:24:20.860 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:20.860 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 264 > 0 )) 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3890403 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3890403 ']' 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3890403 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3890403 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3890403' 00:24:21.120 killing process with pid 3890403 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3890403 00:24:21.120 Received shutdown signal, test time was about 2.000000 seconds 00:24:21.120 00:24:21.120 Latency(us) 00:24:21.120 [2024-11-26T18:30:54.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.120 [2024-11-26T18:30:54.985Z] =================================================================================================================== 00:24:21.120 [2024-11-26T18:30:54.985Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3890403 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3888350 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3888350 ']' 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3888350 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.120 19:30:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3888350 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3888350' 00:24:21.378 killing process with pid 3888350 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3888350 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3888350 00:24:21.378 00:24:21.378 real 0m12.794s 00:24:21.378 user 0m25.442s 00:24:21.378 sys 0m2.747s 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.378 
************************************ 00:24:21.378 END TEST nvmf_digest_error 00:24:21.378 ************************************ 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.378 rmmod nvme_tcp 00:24:21.378 rmmod nvme_fabrics 00:24:21.378 rmmod nvme_keyring 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3888350 ']' 00:24:21.378 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3888350 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3888350 ']' 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3888350 00:24:21.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3888350) - No such process 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3888350 is not found' 00:24:21.379 Process with pid 3888350 is 
not found 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.379 19:30:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.914 00:24:23.914 real 0m34.969s 00:24:23.914 user 0m55.131s 00:24:23.914 sys 0m9.744s 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.914 ************************************ 00:24:23.914 END TEST nvmf_digest 00:24:23.914 ************************************ 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:23.914 19:30:57 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.914 ************************************ 00:24:23.914 START TEST nvmf_bdevperf 00:24:23.914 ************************************ 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:23.914 * Looking for test storage... 00:24:23.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.914 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.915 --rc genhtml_branch_coverage=1 00:24:23.915 --rc genhtml_function_coverage=1 00:24:23.915 --rc genhtml_legend=1 00:24:23.915 --rc geninfo_all_blocks=1 00:24:23.915 --rc geninfo_unexecuted_blocks=1 00:24:23.915 00:24:23.915 ' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.915 --rc genhtml_branch_coverage=1 00:24:23.915 --rc genhtml_function_coverage=1 00:24:23.915 --rc genhtml_legend=1 00:24:23.915 --rc geninfo_all_blocks=1 00:24:23.915 --rc geninfo_unexecuted_blocks=1 00:24:23.915 00:24:23.915 ' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.915 --rc genhtml_branch_coverage=1 00:24:23.915 --rc genhtml_function_coverage=1 00:24:23.915 --rc genhtml_legend=1 00:24:23.915 --rc geninfo_all_blocks=1 00:24:23.915 --rc geninfo_unexecuted_blocks=1 00:24:23.915 00:24:23.915 ' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.915 --rc genhtml_branch_coverage=1 00:24:23.915 --rc genhtml_function_coverage=1 00:24:23.915 --rc genhtml_legend=1 00:24:23.915 --rc geninfo_all_blocks=1 
00:24:23.915 --rc geninfo_unexecuted_blocks=1 00:24:23.915 00:24:23.915 ' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.915 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.916 19:30:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:29.192 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:29.192 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.192 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:29.193 Found net devices under 0000:31:00.0: cvl_0_0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:29.193 Found net devices under 0000:31:00.1: cvl_0_1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.193 19:31:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.193 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:24:29.193 00:24:29.193 --- 10.0.0.2 ping statistics --- 00:24:29.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.193 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:24:29.193 00:24:29.193 --- 10.0.0.1 ping statistics --- 00:24:29.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.193 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.193 19:31:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3895429 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3895429 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3895429 ']' 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 [2024-11-26 19:31:02.690337] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:24:29.193 [2024-11-26 19:31:02.690383] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.193 [2024-11-26 19:31:02.764131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.193 [2024-11-26 19:31:02.793870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.193 [2024-11-26 19:31:02.793898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.193 [2024-11-26 19:31:02.793905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.193 [2024-11-26 19:31:02.793910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.193 [2024-11-26 19:31:02.793914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:29.193 [2024-11-26 19:31:02.795035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.193 [2024-11-26 19:31:02.795188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:29.193 [2024-11-26 19:31:02.795322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 [2024-11-26 19:31:02.902853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 Malloc0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:29.193 [2024-11-26 19:31:02.951803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:29.193 
19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:29.193 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:29.193 { 00:24:29.193 "params": { 00:24:29.193 "name": "Nvme$subsystem", 00:24:29.193 "trtype": "$TEST_TRANSPORT", 00:24:29.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:29.193 "adrfam": "ipv4", 00:24:29.193 "trsvcid": "$NVMF_PORT", 00:24:29.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:29.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:29.193 "hdgst": ${hdgst:-false}, 00:24:29.193 "ddgst": ${ddgst:-false} 00:24:29.193 }, 00:24:29.194 "method": "bdev_nvme_attach_controller" 00:24:29.194 } 00:24:29.194 EOF 00:24:29.194 )") 00:24:29.194 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:29.194 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:24:29.194 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:29.194 19:31:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:29.194 "params": { 00:24:29.194 "name": "Nvme1", 00:24:29.194 "trtype": "tcp", 00:24:29.194 "traddr": "10.0.0.2", 00:24:29.194 "adrfam": "ipv4", 00:24:29.194 "trsvcid": "4420", 00:24:29.194 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:29.194 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:29.194 "hdgst": false, 00:24:29.194 "ddgst": false 00:24:29.194 }, 00:24:29.194 "method": "bdev_nvme_attach_controller" 00:24:29.194 }' 00:24:29.194 [2024-11-26 19:31:02.989951] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
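The rpc_cmd calls traced above (nvmf_create_transport through nvmf_subsystem_add_listener) are the standard SPDK NVMe-oF target bring-up. A dry-run sketch of the same sequence, assuming `scripts/rpc.py` as the RPC client (the harness wraps it as rpc_cmd), would be:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target bring-up steps traced above. RPC is an
# assumed path to SPDK's rpc.py; all arguments are copied from the trace.
RPC="scripts/rpc.py"
setup_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512 -b Malloc0"
  "$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
# Print rather than execute, so the sketch runs without a live target.
printf '%s\n' "${setup_cmds[@]}"
```

The ordering matters: the transport must exist before a listener can be added, and the Malloc0 bdev must exist before it can be attached as a namespace.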
00:24:29.194 [2024-11-26 19:31:02.989997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895472 ] 00:24:29.452 [2024-11-26 19:31:03.066605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.452 [2024-11-26 19:31:03.102588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.712 Running I/O for 1 seconds... 00:24:30.649 11781.00 IOPS, 46.02 MiB/s 00:24:30.649 Latency(us) 00:24:30.649 [2024-11-26T18:31:04.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.649 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:30.649 Verification LBA range: start 0x0 length 0x4000 00:24:30.649 Nvme1n1 : 1.01 11815.48 46.15 0.00 0.00 10780.62 2457.60 11250.35 00:24:30.649 [2024-11-26T18:31:04.514Z] =================================================================================================================== 00:24:30.649 [2024-11-26T18:31:04.514Z] Total : 11815.48 46.15 0.00 0.00 10780.62 2457.60 11250.35 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3895817 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:24:30.649 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:30.649 { 00:24:30.649 "params": { 00:24:30.649 "name": "Nvme$subsystem", 00:24:30.649 "trtype": "$TEST_TRANSPORT", 00:24:30.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:30.649 "adrfam": "ipv4", 00:24:30.649 "trsvcid": "$NVMF_PORT", 00:24:30.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:30.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:30.649 "hdgst": ${hdgst:-false}, 00:24:30.649 "ddgst": ${ddgst:-false} 00:24:30.649 }, 00:24:30.649 "method": "bdev_nvme_attach_controller" 00:24:30.649 } 00:24:30.649 EOF 00:24:30.649 )") 00:24:30.650 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:24:30.650 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:24:30.650 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:24:30.650 19:31:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:30.650 "params": { 00:24:30.650 "name": "Nvme1", 00:24:30.650 "trtype": "tcp", 00:24:30.650 "traddr": "10.0.0.2", 00:24:30.650 "adrfam": "ipv4", 00:24:30.650 "trsvcid": "4420", 00:24:30.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:30.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:30.650 "hdgst": false, 00:24:30.650 "ddgst": false 00:24:30.650 }, 00:24:30.650 "method": "bdev_nvme_attach_controller" 00:24:30.650 }' 00:24:30.909 [2024-11-26 19:31:04.527237] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
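The gen_nvmf_target_json heredoc shown twice above expands, per subsystem, into the bdev_nvme_attach_controller JSON that bdevperf reads via --json. A standalone sketch of that expansion, with this run's transport/address/port values hard-coded and the harness's jq validation step omitted, is:

```shell
#!/usr/bin/env bash
# Standalone sketch of the gen_nvmf_target_json pattern from nvmf/common.sh;
# the transport/address/port values are copied from this run's trace.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# Join entries with commas, as the harness does before piping through jq.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

Because the heredoc delimiter is unquoted, `$subsystem` and the `${hdgst:-false}` defaults expand at generation time, which is how the second printf in the trace ends up with the fully resolved Nvme1 block.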
00:24:30.909 [2024-11-26 19:31:04.527291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895817 ] 00:24:30.909 [2024-11-26 19:31:04.605242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.909 [2024-11-26 19:31:04.641376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.167 Running I/O for 15 seconds... 00:24:33.047 11394.00 IOPS, 44.51 MiB/s [2024-11-26T18:31:07.855Z] 11677.00 IOPS, 45.61 MiB/s [2024-11-26T18:31:07.855Z] 19:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3895429 00:24:33.990 19:31:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:33.990 [2024-11-26 19:31:07.509464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 
19:31:07.509690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.990 [2024-11-26 19:31:07.509855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:33.990 [2024-11-26 19:31:07.509861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509927] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.509993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:79 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.509998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
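Every WRITE still queued when the target process is killed (the kill -9 above) is completed by the initiator side with ABORTED - SQ DELETION, which is what this run of completions shows. When triaging such a log offline, a small helper can tally the aborts; the function name and log path here are illustrative, not part of the harness:

```shell
#!/usr/bin/env bash
# Count how many I/Os were completed as "ABORTED - SQ DELETION" in a
# saved log file. Helper name and usage are illustrative only.
count_sq_deletion_aborts() {
  grep -c 'ABORTED - SQ DELETION' "$1"
}
```

For example, `count_sq_deletion_aborts saved.log` prints a single count, which can be checked against the queue depth (128 in this run) to confirm all outstanding I/Os were accounted for after the kill.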
00:24:33.991 [2024-11-26 19:31:07.510062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 
[2024-11-26 19:31:07.510266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.991 [2024-11-26 19:31:07.510330] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.991 [2024-11-26 19:31:07.510337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 
[2024-11-26 19:31:07.510467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 
lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 
[2024-11-26 19:31:07.510666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.992 [2024-11-26 19:31:07.510694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510730] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:124648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.992 [2024-11-26 19:31:07.510800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.992 [2024-11-26 19:31:07.510806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:33.993 [2024-11-26 19:31:07.510865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:33.993 [2024-11-26 19:31:07.510977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.510989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 
lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.511000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.511007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.511012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.511018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.511023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.511030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.511035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.511042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.993 [2024-11-26 19:31:07.511047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.511053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205c970 is same with the state(6) to be set 00:24:33.993 [2024-11-26 19:31:07.511060] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:33.993 [2024-11-26 19:31:07.511065] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:33.993 [2024-11-26 19:31:07.511069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124848 len:8 PRP1 0x0 PRP2 0x0 00:24:33.993 [2024-11-26 19:31:07.511074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.993 [2024-11-26 19:31:07.513538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.993 [2024-11-26 19:31:07.513581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.993 [2024-11-26 19:31:07.514381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.993 [2024-11-26 19:31:07.514413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.993 [2024-11-26 19:31:07.514422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.993 [2024-11-26 19:31:07.514593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.993 [2024-11-26 19:31:07.514750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.993 [2024-11-26 19:31:07.514757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.993 [2024-11-26 19:31:07.514765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.993 [2024-11-26 19:31:07.514772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.993 [2024-11-26 19:31:07.526455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.993 [2024-11-26 19:31:07.527027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.993 [2024-11-26 19:31:07.527058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.993 [2024-11-26 19:31:07.527066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.993 [2024-11-26 19:31:07.527242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.993 [2024-11-26 19:31:07.527396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.993 [2024-11-26 19:31:07.527402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.993 [2024-11-26 19:31:07.527408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.993 [2024-11-26 19:31:07.527415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.993 [2024-11-26 19:31:07.539102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.993 [2024-11-26 19:31:07.539670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.993 [2024-11-26 19:31:07.539700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.993 [2024-11-26 19:31:07.539709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.993 [2024-11-26 19:31:07.539875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.993 [2024-11-26 19:31:07.540027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.993 [2024-11-26 19:31:07.540034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.993 [2024-11-26 19:31:07.540039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.993 [2024-11-26 19:31:07.540045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.993 [2024-11-26 19:31:07.551739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.993 [2024-11-26 19:31:07.552331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.993 [2024-11-26 19:31:07.552362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.993 [2024-11-26 19:31:07.552371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.993 [2024-11-26 19:31:07.552536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.993 [2024-11-26 19:31:07.552689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.993 [2024-11-26 19:31:07.552696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.993 [2024-11-26 19:31:07.552704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.993 [2024-11-26 19:31:07.552710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.993 [2024-11-26 19:31:07.564373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.994 [2024-11-26 19:31:07.564856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.994 [2024-11-26 19:31:07.564872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.994 [2024-11-26 19:31:07.564877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.994 [2024-11-26 19:31:07.565027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.994 [2024-11-26 19:31:07.565183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.994 [2024-11-26 19:31:07.565189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.994 [2024-11-26 19:31:07.565194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.994 [2024-11-26 19:31:07.565199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.994 [2024-11-26 19:31:07.576988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.994 [2024-11-26 19:31:07.577546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.994 [2024-11-26 19:31:07.577577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.994 [2024-11-26 19:31:07.577585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.994 [2024-11-26 19:31:07.577751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.994 [2024-11-26 19:31:07.577904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.994 [2024-11-26 19:31:07.577910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.994 [2024-11-26 19:31:07.577916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.994 [2024-11-26 19:31:07.577922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.994 [2024-11-26 19:31:07.589584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.994 [2024-11-26 19:31:07.590086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.994 [2024-11-26 19:31:07.590106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.994 [2024-11-26 19:31:07.590112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.994 [2024-11-26 19:31:07.590261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.994 [2024-11-26 19:31:07.590411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.994 [2024-11-26 19:31:07.590416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.994 [2024-11-26 19:31:07.590422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.994 [2024-11-26 19:31:07.590427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.994 [2024-11-26 19:31:07.602249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:33.994 [2024-11-26 19:31:07.602736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:33.994 [2024-11-26 19:31:07.602749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:33.994 [2024-11-26 19:31:07.602755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:33.994 [2024-11-26 19:31:07.602904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:33.994 [2024-11-26 19:31:07.603053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:33.994 [2024-11-26 19:31:07.603059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:33.994 [2024-11-26 19:31:07.603063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:33.994 [2024-11-26 19:31:07.603068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:33.994 [2024-11-26 19:31:07.614884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.994 [2024-11-26 19:31:07.615334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.994 [2024-11-26 19:31:07.615364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.994 [2024-11-26 19:31:07.615374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.994 [2024-11-26 19:31:07.615543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.994 [2024-11-26 19:31:07.615696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.994 [2024-11-26 19:31:07.615702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.994 [2024-11-26 19:31:07.615708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.994 [2024-11-26 19:31:07.615714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.994 [2024-11-26 19:31:07.627545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.994 [2024-11-26 19:31:07.628155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.994 [2024-11-26 19:31:07.628186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.994 [2024-11-26 19:31:07.628194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.994 [2024-11-26 19:31:07.628360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.994 [2024-11-26 19:31:07.628513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.994 [2024-11-26 19:31:07.628519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.994 [2024-11-26 19:31:07.628525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.994 [2024-11-26 19:31:07.628531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.994 [2024-11-26 19:31:07.640209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.994 [2024-11-26 19:31:07.640671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.994 [2024-11-26 19:31:07.640686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.994 [2024-11-26 19:31:07.640695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.994 [2024-11-26 19:31:07.640845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.994 [2024-11-26 19:31:07.640994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.994 [2024-11-26 19:31:07.641000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.994 [2024-11-26 19:31:07.641005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.994 [2024-11-26 19:31:07.641010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.994 [2024-11-26 19:31:07.652817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.994 [2024-11-26 19:31:07.653399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.994 [2024-11-26 19:31:07.653431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.994 [2024-11-26 19:31:07.653440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.994 [2024-11-26 19:31:07.653608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.994 [2024-11-26 19:31:07.653761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.994 [2024-11-26 19:31:07.653767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.994 [2024-11-26 19:31:07.653773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.994 [2024-11-26 19:31:07.653779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.994 [2024-11-26 19:31:07.665445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.994 [2024-11-26 19:31:07.666041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.994 [2024-11-26 19:31:07.666073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.994 [2024-11-26 19:31:07.666084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.994 [2024-11-26 19:31:07.666256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.994 [2024-11-26 19:31:07.666410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.994 [2024-11-26 19:31:07.666416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.666421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.666427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.678104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.678576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.678590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.678596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.678746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.678899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.678905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.678910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.678915] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.690728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.691220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.691252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.691261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.691429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.691581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.691588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.691593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.691599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.703413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.703875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.703905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.703914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.704080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.704239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.704246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.704252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.704258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.716063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.716605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.716636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.716645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.716810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.716962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.716968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.716977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.716983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.728791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.729408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.729438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.729447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.729614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.729767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.729773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.729778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.729784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.741488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.741988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.742003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.742008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.742162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.742312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.742317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.742323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.742327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.754131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.754697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.754727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.754736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.754901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.755054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.755060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.755066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.755071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.766739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.767280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.767295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.767301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.767451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.767601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.767607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.767612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.767617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.779422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.779876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.779888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.779894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.780042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.995 [2024-11-26 19:31:07.780196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.995 [2024-11-26 19:31:07.780202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.995 [2024-11-26 19:31:07.780207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.995 [2024-11-26 19:31:07.780211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.995 [2024-11-26 19:31:07.792009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.995 [2024-11-26 19:31:07.792466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.995 [2024-11-26 19:31:07.792479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.995 [2024-11-26 19:31:07.792485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.995 [2024-11-26 19:31:07.792634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.996 [2024-11-26 19:31:07.792783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.996 [2024-11-26 19:31:07.792789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.996 [2024-11-26 19:31:07.792794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.996 [2024-11-26 19:31:07.792799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.996 [2024-11-26 19:31:07.804608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.996 [2024-11-26 19:31:07.805105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.996 [2024-11-26 19:31:07.805118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.996 [2024-11-26 19:31:07.805126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.996 [2024-11-26 19:31:07.805276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.996 [2024-11-26 19:31:07.805425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.996 [2024-11-26 19:31:07.805431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.996 [2024-11-26 19:31:07.805436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.996 [2024-11-26 19:31:07.805441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.996 [2024-11-26 19:31:07.817238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.996 [2024-11-26 19:31:07.817594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.996 [2024-11-26 19:31:07.817606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.996 [2024-11-26 19:31:07.817611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.996 [2024-11-26 19:31:07.817760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.996 [2024-11-26 19:31:07.817909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.996 [2024-11-26 19:31:07.817914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.996 [2024-11-26 19:31:07.817919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.996 [2024-11-26 19:31:07.817924] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.996 [2024-11-26 19:31:07.829868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.996 [2024-11-26 19:31:07.830231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.996 [2024-11-26 19:31:07.830261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.996 [2024-11-26 19:31:07.830270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.996 [2024-11-26 19:31:07.830438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.996 [2024-11-26 19:31:07.830591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.996 [2024-11-26 19:31:07.830597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.996 [2024-11-26 19:31:07.830602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.996 [2024-11-26 19:31:07.830608] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:33.996 10653.67 IOPS, 41.62 MiB/s [2024-11-26T18:31:07.861Z] [2024-11-26 19:31:07.843717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:33.996 [2024-11-26 19:31:07.844330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:33.996 [2024-11-26 19:31:07.844362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:33.996 [2024-11-26 19:31:07.844371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:33.996 [2024-11-26 19:31:07.844543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:33.996 [2024-11-26 19:31:07.844695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:33.996 [2024-11-26 19:31:07.844701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:33.996 [2024-11-26 19:31:07.844707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:33.996 [2024-11-26 19:31:07.844712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.257 [2024-11-26 19:31:07.856420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.257 [2024-11-26 19:31:07.856983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.257 [2024-11-26 19:31:07.857013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.257 [2024-11-26 19:31:07.857022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.257 [2024-11-26 19:31:07.857192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.857346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.857353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.857358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.857364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.869022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.869380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.869396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.869402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.869553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.869702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.869709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.869714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.869719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.881668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.882213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.882243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.882252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.882420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.882574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.882580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.882589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.882595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.894271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.894827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.894857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.894866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.895034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.895195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.895202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.895207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.895213] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.906881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.907517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.907548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.907557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.907723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.907875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.907882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.907887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.907893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.919569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.920024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.920039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.920045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.920199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.920349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.920355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.920360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.920366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.932177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.932744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.932774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.932783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.932948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.933116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.933124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.933129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.933135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.944809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.945386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.945417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.945426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.945591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.945745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.945752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.945758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.945764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.957432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.258 [2024-11-26 19:31:07.958026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.258 [2024-11-26 19:31:07.958057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.258 [2024-11-26 19:31:07.958067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.258 [2024-11-26 19:31:07.958242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.258 [2024-11-26 19:31:07.958397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.258 [2024-11-26 19:31:07.958404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.258 [2024-11-26 19:31:07.958409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.258 [2024-11-26 19:31:07.958415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.258 [2024-11-26 19:31:07.970072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.258 [2024-11-26 19:31:07.970570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.258 [2024-11-26 19:31:07.970590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.258 [2024-11-26 19:31:07.970596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.258 [2024-11-26 19:31:07.970746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.258 [2024-11-26 19:31:07.970896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.258 [2024-11-26 19:31:07.970903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.258 [2024-11-26 19:31:07.970908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.258 [2024-11-26 19:31:07.970913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.258 [2024-11-26 19:31:07.982716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.258 [2024-11-26 19:31:07.983125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.258 [2024-11-26 19:31:07.983139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.258 [2024-11-26 19:31:07.983145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:07.983296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:07.983447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:07.983453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:07.983458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:07.983463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:07.995410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:07.995962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:07.995994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:07.996002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:07.996174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:07.996328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:07.996336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:07.996342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:07.996347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.008014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.008505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.008521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.008527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.008680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.008831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.008838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.008843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.008848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.020665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.021122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.021139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.021145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.021295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.021445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.021451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.021457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.021462] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.033269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.033851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.033884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.033893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.034058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.034227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.034236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.034242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.034248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.045957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.046521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.046553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.046562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.046728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.046881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.046888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.046898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.046904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.058601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.059199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.059232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.059241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.059407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.059560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.059567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.059573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.059579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.071260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.071853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.071885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.071895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.072062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.072223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.072231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.072237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.072244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.083920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.084395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.084412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.084418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.084568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.084719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.084726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.084732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.084737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.096553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.096995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.097009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.259 [2024-11-26 19:31:08.097015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.259 [2024-11-26 19:31:08.097169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.259 [2024-11-26 19:31:08.097320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.259 [2024-11-26 19:31:08.097326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.259 [2024-11-26 19:31:08.097331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.259 [2024-11-26 19:31:08.097336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.259 [2024-11-26 19:31:08.109143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.259 [2024-11-26 19:31:08.109627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.259 [2024-11-26 19:31:08.109641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.260 [2024-11-26 19:31:08.109648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.260 [2024-11-26 19:31:08.109798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.260 [2024-11-26 19:31:08.109948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.260 [2024-11-26 19:31:08.109955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.260 [2024-11-26 19:31:08.109961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.260 [2024-11-26 19:31:08.109966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.520 [2024-11-26 19:31:08.121775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.520 [2024-11-26 19:31:08.122160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.520 [2024-11-26 19:31:08.122174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.520 [2024-11-26 19:31:08.122179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.520 [2024-11-26 19:31:08.122329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.520 [2024-11-26 19:31:08.122480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.520 [2024-11-26 19:31:08.122487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.520 [2024-11-26 19:31:08.122493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.520 [2024-11-26 19:31:08.122499] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.520 [2024-11-26 19:31:08.134443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.520 [2024-11-26 19:31:08.134891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.520 [2024-11-26 19:31:08.134909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.520 [2024-11-26 19:31:08.134916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.520 [2024-11-26 19:31:08.135066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.520 [2024-11-26 19:31:08.135228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.135235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.135241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.135246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.147052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.147526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.147539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.147545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.147695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.147845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.147852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.147857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.147862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.159663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.160118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.160132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.160137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.160287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.160437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.160444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.160449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.160454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.172253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.172830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.172862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.172871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.173040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.173202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.173210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.173216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.173222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.184894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.185389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.185421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.185430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.185597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.185751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.185758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.185765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.185771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.197583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.198191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.198223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.198232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.198399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.198553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.198560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.198567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.198574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.210243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.521 [2024-11-26 19:31:08.210843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.521 [2024-11-26 19:31:08.210875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.521 [2024-11-26 19:31:08.210884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.521 [2024-11-26 19:31:08.211051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.521 [2024-11-26 19:31:08.211212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.521 [2024-11-26 19:31:08.211221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.521 [2024-11-26 19:31:08.211232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.521 [2024-11-26 19:31:08.211238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.521 [2024-11-26 19:31:08.222900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.521 [2024-11-26 19:31:08.223452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.521 [2024-11-26 19:31:08.223484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.521 [2024-11-26 19:31:08.223492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.521 [2024-11-26 19:31:08.223658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.521 [2024-11-26 19:31:08.223811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.521 [2024-11-26 19:31:08.223818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.521 [2024-11-26 19:31:08.223824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.521 [2024-11-26 19:31:08.223831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.521 [2024-11-26 19:31:08.235509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.521 [2024-11-26 19:31:08.236115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.521 [2024-11-26 19:31:08.236146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.521 [2024-11-26 19:31:08.236155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.521 [2024-11-26 19:31:08.236322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.521 [2024-11-26 19:31:08.236475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.521 [2024-11-26 19:31:08.236482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.521 [2024-11-26 19:31:08.236488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.521 [2024-11-26 19:31:08.236495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.521 [2024-11-26 19:31:08.248169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.521 [2024-11-26 19:31:08.248770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.521 [2024-11-26 19:31:08.248802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.521 [2024-11-26 19:31:08.248811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.521 [2024-11-26 19:31:08.248976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.521 [2024-11-26 19:31:08.249135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.521 [2024-11-26 19:31:08.249143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.521 [2024-11-26 19:31:08.249149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.521 [2024-11-26 19:31:08.249155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.521 [2024-11-26 19:31:08.260815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.521 [2024-11-26 19:31:08.261412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.521 [2024-11-26 19:31:08.261444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.521 [2024-11-26 19:31:08.261453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.521 [2024-11-26 19:31:08.261619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.521 [2024-11-26 19:31:08.261772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.261779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.261785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.261791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.273456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.274054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.274086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.274094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.274267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.274422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.274430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.274436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.274443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.286115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.286711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.286743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.286752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.286918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.287072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.287079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.287085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.287092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.298756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.299383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.299419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.299427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.299593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.299746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.299753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.299760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.299766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.311427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.311987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.312018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.312028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.312201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.312355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.312363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.312368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.312374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.324041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.324596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.324628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.324637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.324803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.324957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.324964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.324971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.324978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.336649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.337201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.337233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.337242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.337410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.337567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.337575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.337581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.337587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.349264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.349850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.349881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.349890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.350056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.350217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.350226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.350232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.350238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.361896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.362500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.362532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.362540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.362706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.362859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.362866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.362873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.362879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.522 [2024-11-26 19:31:08.374546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.522 [2024-11-26 19:31:08.374965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.522 [2024-11-26 19:31:08.374981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.522 [2024-11-26 19:31:08.374988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.522 [2024-11-26 19:31:08.375144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.522 [2024-11-26 19:31:08.375295] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.522 [2024-11-26 19:31:08.375302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.522 [2024-11-26 19:31:08.375311] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.522 [2024-11-26 19:31:08.375316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.783 [2024-11-26 19:31:08.387253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.783 [2024-11-26 19:31:08.387837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.783 [2024-11-26 19:31:08.387869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.783 [2024-11-26 19:31:08.387877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.783 [2024-11-26 19:31:08.388044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.783 [2024-11-26 19:31:08.388206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.783 [2024-11-26 19:31:08.388214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.783 [2024-11-26 19:31:08.388220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.783 [2024-11-26 19:31:08.388227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.783 [2024-11-26 19:31:08.399885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.783 [2024-11-26 19:31:08.400474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.783 [2024-11-26 19:31:08.400506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.783 [2024-11-26 19:31:08.400515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.783 [2024-11-26 19:31:08.400681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.783 [2024-11-26 19:31:08.400835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.783 [2024-11-26 19:31:08.400842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.400848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.400854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.412509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.413130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.413169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.413177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.413343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.413496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.413504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.413509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.413515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.425186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.425782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.425813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.425822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.425988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.426148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.426156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.426162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.426168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.437838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.438423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.438455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.438464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.438632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.438785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.438792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.438798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.438805] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.450494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.451091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.451131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.451141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.451310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.451464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.451472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.451477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.451483] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.463156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.463756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.463788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.463800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.463965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.464125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.464133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.464139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.464145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.475802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.476278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.476294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.476300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.476450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.476600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.476607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.476612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.476617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.488423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.488985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.489017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.489025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.489199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.489353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.489360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.489366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.489372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.501073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.501671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.501703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.501713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.501878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.502036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.502043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.502050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.502056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.513720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.514370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.514403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.514412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.514577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.514731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.514739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.784 [2024-11-26 19:31:08.514745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.784 [2024-11-26 19:31:08.514752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.784 [2024-11-26 19:31:08.526415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.784 [2024-11-26 19:31:08.526871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.784 [2024-11-26 19:31:08.526887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.784 [2024-11-26 19:31:08.526893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.784 [2024-11-26 19:31:08.527042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.784 [2024-11-26 19:31:08.527199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.784 [2024-11-26 19:31:08.527207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.785 [2024-11-26 19:31:08.527213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.785 [2024-11-26 19:31:08.527219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.785 [2024-11-26 19:31:08.539028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.785 [2024-11-26 19:31:08.539603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.785 [2024-11-26 19:31:08.539635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.785 [2024-11-26 19:31:08.539644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.785 [2024-11-26 19:31:08.539810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.785 [2024-11-26 19:31:08.539972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.785 [2024-11-26 19:31:08.539980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.785 [2024-11-26 19:31:08.539989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.785 [2024-11-26 19:31:08.539996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.785 [2024-11-26 19:31:08.551746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.785 [2024-11-26 19:31:08.552296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.785 [2024-11-26 19:31:08.552328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.785 [2024-11-26 19:31:08.552337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.785 [2024-11-26 19:31:08.552504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.785 [2024-11-26 19:31:08.552658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.785 [2024-11-26 19:31:08.552666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.785 [2024-11-26 19:31:08.552672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.785 [2024-11-26 19:31:08.552678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.785 [2024-11-26 19:31:08.564343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:34.785 [2024-11-26 19:31:08.564806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:34.785 [2024-11-26 19:31:08.564823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:34.785 [2024-11-26 19:31:08.564829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:34.785 [2024-11-26 19:31:08.564980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:34.785 [2024-11-26 19:31:08.565135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:34.785 [2024-11-26 19:31:08.565142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:34.785 [2024-11-26 19:31:08.565148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:34.785 [2024-11-26 19:31:08.565153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:34.785 [2024-11-26 19:31:08.576953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.577417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.577431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.577436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.577586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.577736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.577743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.577749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.577753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.785 [2024-11-26 19:31:08.589567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.590056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.590069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.590075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.590228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.590379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.590386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.590392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.590396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.785 [2024-11-26 19:31:08.602193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.602631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.602644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.602650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.602799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.602949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.602956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.602961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.602966] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.785 [2024-11-26 19:31:08.614906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.615382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.615395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.615401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.615550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.615700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.615706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.615712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.615717] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.785 [2024-11-26 19:31:08.627511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.628004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.628017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.628025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.628181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.628332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.628338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.628343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.628349] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:34.785 [2024-11-26 19:31:08.640155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:34.785 [2024-11-26 19:31:08.640644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:34.785 [2024-11-26 19:31:08.640658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:34.785 [2024-11-26 19:31:08.640664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:34.785 [2024-11-26 19:31:08.640813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:34.785 [2024-11-26 19:31:08.640964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:34.785 [2024-11-26 19:31:08.640970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:34.785 [2024-11-26 19:31:08.640976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:34.785 [2024-11-26 19:31:08.640981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.046 [2024-11-26 19:31:08.652781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.046 [2024-11-26 19:31:08.653228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.046 [2024-11-26 19:31:08.653259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.046 [2024-11-26 19:31:08.653268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.046 [2024-11-26 19:31:08.653438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.046 [2024-11-26 19:31:08.653592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.046 [2024-11-26 19:31:08.653599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.046 [2024-11-26 19:31:08.653605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.046 [2024-11-26 19:31:08.653611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.046 [2024-11-26 19:31:08.665418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.046 [2024-11-26 19:31:08.666018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.046 [2024-11-26 19:31:08.666049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.046 [2024-11-26 19:31:08.666058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.046 [2024-11-26 19:31:08.666231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.046 [2024-11-26 19:31:08.666389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.046 [2024-11-26 19:31:08.666396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.046 [2024-11-26 19:31:08.666402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.046 [2024-11-26 19:31:08.666409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.046 [2024-11-26 19:31:08.678068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.046 [2024-11-26 19:31:08.678644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.046 [2024-11-26 19:31:08.678677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.678686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.678851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.679005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.679012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.679017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.679023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.690693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.691173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.691206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.691215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.691382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.691536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.691543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.691549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.691556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.703363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.703858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.703874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.703880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.704030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.704186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.704194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.704203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.704208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.715999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.716458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.716472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.716478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.716628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.716777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.716784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.716789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.716795] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.728589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.729051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.729064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.729070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.729224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.729375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.729382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.729387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.729392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.741193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.741719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.741751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.741760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.741926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.742080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.742087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.742093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.742108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.753792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.754397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.754430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.754438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.754604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.754757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.754764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.754770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.754776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.766443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.767015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.767047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.767056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.767229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.767383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.767390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.767396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.767402] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.779062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.779668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.779700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.779708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.779874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.780028] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.780036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.780042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.780048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.791725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.792205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.792237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.047 [2024-11-26 19:31:08.792250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.047 [2024-11-26 19:31:08.792419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.047 [2024-11-26 19:31:08.792572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.047 [2024-11-26 19:31:08.792580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.047 [2024-11-26 19:31:08.792586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.047 [2024-11-26 19:31:08.792592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.047 [2024-11-26 19:31:08.804397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.047 [2024-11-26 19:31:08.804907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.047 [2024-11-26 19:31:08.804939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.048 [2024-11-26 19:31:08.804947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.048 [2024-11-26 19:31:08.805122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.048 [2024-11-26 19:31:08.805276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.048 [2024-11-26 19:31:08.805283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.048 [2024-11-26 19:31:08.805289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.048 [2024-11-26 19:31:08.805295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.048 [2024-11-26 19:31:08.817094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.048 [2024-11-26 19:31:08.817682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.048 [2024-11-26 19:31:08.817714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.048 [2024-11-26 19:31:08.817723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.048 [2024-11-26 19:31:08.817889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.048 [2024-11-26 19:31:08.818042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.048 [2024-11-26 19:31:08.818050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.048 [2024-11-26 19:31:08.818055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.048 [2024-11-26 19:31:08.818062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.048 [2024-11-26 19:31:08.829724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.830340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.830372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.830381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.830547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.830707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.830714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.830720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.830726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 [2024-11-26 19:31:08.842410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.843017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.843049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.843058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.843230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.843384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.843392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.843399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.843405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 7990.25 IOPS, 31.21 MiB/s [2024-11-26T18:31:08.913Z]
00:24:35.048 [2024-11-26 19:31:08.855078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.855661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.855693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.855702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.855868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.856022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.856029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.856035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.856041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 [2024-11-26 19:31:08.867699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.868227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.868260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.868269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.868437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.868591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.868598] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.868607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.868613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 [2024-11-26 19:31:08.880426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.880980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.881012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.881020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.881193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.881347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.881354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.881360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.881366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 [2024-11-26 19:31:08.893017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.893499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.893529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.893538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.893704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.893857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.893864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.893870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.893877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.048 [2024-11-26 19:31:08.905693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.048 [2024-11-26 19:31:08.906179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.048 [2024-11-26 19:31:08.906211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.048 [2024-11-26 19:31:08.906220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.048 [2024-11-26 19:31:08.906389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.048 [2024-11-26 19:31:08.906542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.048 [2024-11-26 19:31:08.906549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.048 [2024-11-26 19:31:08.906555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.048 [2024-11-26 19:31:08.906561] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.918383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.918939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.918971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.918980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.919154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.919308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.919315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.919321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.919328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.930989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.931502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.931519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.931525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.931675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.931825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.931832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.931837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.931842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.943667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.944147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.944164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.944170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.944320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.944470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.944476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.944482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.944487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.956283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.956852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.956886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.956895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.957061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.957222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.957230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.957236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.957242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.968901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.969437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.969469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.969478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.969644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.969797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.969804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.969810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.969817] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.981626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.982204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.982236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.982245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.982413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.982567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.982574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.982580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.982585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:08.994259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:08.994830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:08.994862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:08.994871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:08.995040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:08.995201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:08.995209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:08.995214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:08.995221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:09.006879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:09.007443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:09.007475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:09.007483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:09.007649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:09.007803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:09.007810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:09.007817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.310 [2024-11-26 19:31:09.007824] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.310 [2024-11-26 19:31:09.019491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.310 [2024-11-26 19:31:09.020085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.310 [2024-11-26 19:31:09.020121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.310 [2024-11-26 19:31:09.020130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.310 [2024-11-26 19:31:09.020296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.310 [2024-11-26 19:31:09.020449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.310 [2024-11-26 19:31:09.020457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.310 [2024-11-26 19:31:09.020463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.020469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.032140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.032712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.032744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.032753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.032918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.033072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.033079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.033089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.033096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.044781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.045419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.045451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.045460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.045626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.045779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.045786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.045792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.045799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.057469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.058063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.058096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.058111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.058279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.058433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.058440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.058446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.058452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.070108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.070657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.070689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.070698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.070864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.071017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.071024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.071030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.071037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.082708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.083205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.083237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.083246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.083414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.083567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.083576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.083583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.083589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.095395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.095890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.095906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.095912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.096062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.096218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.096225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.096231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.096237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.108028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.108574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.108605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.108614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.108780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.108934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.108941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.108946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.108952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.120622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.121118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.121139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.121145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.121295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.121446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.121453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.121458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.121463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.133263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.133724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.133756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.133765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.133931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.134084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.311 [2024-11-26 19:31:09.134092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.311 [2024-11-26 19:31:09.134099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.311 [2024-11-26 19:31:09.134111] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.311 [2024-11-26 19:31:09.145933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.311 [2024-11-26 19:31:09.146423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.311 [2024-11-26 19:31:09.146455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.311 [2024-11-26 19:31:09.146464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.311 [2024-11-26 19:31:09.146629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.311 [2024-11-26 19:31:09.146782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.312 [2024-11-26 19:31:09.146789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.312 [2024-11-26 19:31:09.146796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.312 [2024-11-26 19:31:09.146803] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.312 [2024-11-26 19:31:09.158617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.312 [2024-11-26 19:31:09.159063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.312 [2024-11-26 19:31:09.159078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.312 [2024-11-26 19:31:09.159084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.312 [2024-11-26 19:31:09.159243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.312 [2024-11-26 19:31:09.159394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.312 [2024-11-26 19:31:09.159401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.312 [2024-11-26 19:31:09.159406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.312 [2024-11-26 19:31:09.159411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.312 [2024-11-26 19:31:09.171208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:35.312 [2024-11-26 19:31:09.171653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:35.312 [2024-11-26 19:31:09.171666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:35.312 [2024-11-26 19:31:09.171672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:35.312 [2024-11-26 19:31:09.171822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:35.312 [2024-11-26 19:31:09.171972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:35.312 [2024-11-26 19:31:09.171979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:35.312 [2024-11-26 19:31:09.171985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:35.312 [2024-11-26 19:31:09.171991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:35.571 [2024-11-26 19:31:09.183793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.571 [2024-11-26 19:31:09.184405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.571 [2024-11-26 19:31:09.184437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.571 [2024-11-26 19:31:09.184447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.571 [2024-11-26 19:31:09.184612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.571 [2024-11-26 19:31:09.184765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.571 [2024-11-26 19:31:09.184772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.571 [2024-11-26 19:31:09.184778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.571 [2024-11-26 19:31:09.184784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.571 [2024-11-26 19:31:09.196459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.571 [2024-11-26 19:31:09.197062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.571 [2024-11-26 19:31:09.197094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.571 [2024-11-26 19:31:09.197109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.571 [2024-11-26 19:31:09.197278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.571 [2024-11-26 19:31:09.197431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.571 [2024-11-26 19:31:09.197439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.571 [2024-11-26 19:31:09.197449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.571 [2024-11-26 19:31:09.197456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.571 [2024-11-26 19:31:09.209127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.571 [2024-11-26 19:31:09.209736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.571 [2024-11-26 19:31:09.209770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.571 [2024-11-26 19:31:09.209778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.571 [2024-11-26 19:31:09.209944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.210097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.210110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.210116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.210122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.221800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.222430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.222462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.222471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.222637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.222791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.222797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.222804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.222809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.234481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.234981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.234997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.235003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.235159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.235310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.235316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.235322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.235328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.247156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.247611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.247626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.247632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.247781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.247931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.247938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.247943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.247948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.259760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.260207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.260222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.260228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.260378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.260528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.260535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.260540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.260545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.272361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.272853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.272866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.272871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.273021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.273176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.273183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.273188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.273193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.285011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.285469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.285487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.285492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.285642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.285791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.285798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.285803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.285808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.297632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.298071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.298084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.298089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.298244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.298394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.298401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.298406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.298412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.310237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.310749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.310781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.310790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.310955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.311118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.311127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.311132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.311139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.322960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.323557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.323589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.323598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.323768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.323922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.323929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.323936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.323942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.335615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.335959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.572 [2024-11-26 19:31:09.335976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.572 [2024-11-26 19:31:09.335982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.572 [2024-11-26 19:31:09.336138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.572 [2024-11-26 19:31:09.336290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.572 [2024-11-26 19:31:09.336296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.572 [2024-11-26 19:31:09.336302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.572 [2024-11-26 19:31:09.336307] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.572 [2024-11-26 19:31:09.348273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.572 [2024-11-26 19:31:09.348817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.348850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.348859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.349025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.349185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.349193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.349198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.349205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.360866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.361422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.361454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.361463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.361630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.361784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.361792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.361802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.361808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.373485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.373983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.374000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.374006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.374160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.374312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.374318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.374324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.374329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.386144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.386635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.386649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.386655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.386804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.386954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.386961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.386967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.386972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.398788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.399242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.399256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.399262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.399412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.399561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.399568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.399574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.399579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.411383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.411824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.411838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.411844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.411993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.412148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.412154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.412159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.412164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.573 [2024-11-26 19:31:09.423968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.573 [2024-11-26 19:31:09.424419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.573 [2024-11-26 19:31:09.424433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.573 [2024-11-26 19:31:09.424439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.573 [2024-11-26 19:31:09.424589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.573 [2024-11-26 19:31:09.424739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.573 [2024-11-26 19:31:09.424746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.573 [2024-11-26 19:31:09.424751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.573 [2024-11-26 19:31:09.424756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.833 [2024-11-26 19:31:09.436694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.833 [2024-11-26 19:31:09.437178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.833 [2024-11-26 19:31:09.437193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.833 [2024-11-26 19:31:09.437198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.833 [2024-11-26 19:31:09.437348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.833 [2024-11-26 19:31:09.437498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.833 [2024-11-26 19:31:09.437504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.833 [2024-11-26 19:31:09.437510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.833 [2024-11-26 19:31:09.437515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.833 [2024-11-26 19:31:09.449333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.833 [2024-11-26 19:31:09.449773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.833 [2024-11-26 19:31:09.449792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.833 [2024-11-26 19:31:09.449798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.833 [2024-11-26 19:31:09.449948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.833 [2024-11-26 19:31:09.450098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.833 [2024-11-26 19:31:09.450109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.833 [2024-11-26 19:31:09.450115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.833 [2024-11-26 19:31:09.450120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.833 [2024-11-26 19:31:09.461920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.833 [2024-11-26 19:31:09.462374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.833 [2024-11-26 19:31:09.462389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.833 [2024-11-26 19:31:09.462395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.833 [2024-11-26 19:31:09.462544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.462694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.462700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.462705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.462711] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.474512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.474950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.474965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.474971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.475125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.475276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.475282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.475287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.475293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.487103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.487680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.487712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.487721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.487890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.488044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.488051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.488057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.488063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.499739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.500232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.500264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.500273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.500441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.500595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.500602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.500609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.500615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.512442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.512934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.512949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.512956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.513110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.513262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.513268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.513273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.513279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.525087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.525536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.525551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.525557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.525707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.525857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.525864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.525874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.525879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.537688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.538127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.538143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.538149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.538300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.538451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.538458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.538463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.538468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.550287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.550783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.550797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.550803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.550952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.551106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.551113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.551119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.551125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.562918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.563386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.834 [2024-11-26 19:31:09.563399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.834 [2024-11-26 19:31:09.563405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.834 [2024-11-26 19:31:09.563554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.834 [2024-11-26 19:31:09.563704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.834 [2024-11-26 19:31:09.563710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.834 [2024-11-26 19:31:09.563715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.834 [2024-11-26 19:31:09.563720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.834 [2024-11-26 19:31:09.575609] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.834 [2024-11-26 19:31:09.576091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.576111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.576117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.576266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.576416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.576424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.576429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.576435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.588273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.588731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.588745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.588751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.588900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.589051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.589058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.589063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.589068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.600877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.601430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.601463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.601471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.601637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.601791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.601798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.601804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.601811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.613493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.613992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.614012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.614019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.614175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.614327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.614334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.614340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.614345] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.626159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.626609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.626623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.626629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.626779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.626929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.626935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.626941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.626946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.638760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.639112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.639128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.639133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.639283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.639433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.639439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.639445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.639449] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.651418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.651857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.651872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.651878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.652030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.652187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.652194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.652200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.652205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.664009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.664724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.664744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.664750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.664907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.665058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.665065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.665071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.665076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.676597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.677042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.677056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.677062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.677215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.677366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.677372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.677377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.677382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:35.835 [2024-11-26 19:31:09.689211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:35.835 [2024-11-26 19:31:09.689658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:35.835 [2024-11-26 19:31:09.689672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:35.835 [2024-11-26 19:31:09.689678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:35.835 [2024-11-26 19:31:09.689827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:35.835 [2024-11-26 19:31:09.689977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:35.835 [2024-11-26 19:31:09.689983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:35.835 [2024-11-26 19:31:09.689992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:35.835 [2024-11-26 19:31:09.689998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.701791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.702496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.702528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.702537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.702705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.702859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.702866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.702873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.702880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.714435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.714932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.714947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.714954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.715112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.715263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.715270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.715276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.715281] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.727079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.727541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.727556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.727562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.727712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.727861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.727868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.727873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.727878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.739684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.740303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.740335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.740344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.740510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.740664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.740671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.740677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.740683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.752366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.752961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.752993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.753002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.753178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.753332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.753339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.753345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.753352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.765019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.765628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.765660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.765668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.765834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.765987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.765994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.766000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.766006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.777688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.778240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.097 [2024-11-26 19:31:09.778275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.097 [2024-11-26 19:31:09.778284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.097 [2024-11-26 19:31:09.778450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.097 [2024-11-26 19:31:09.778604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.097 [2024-11-26 19:31:09.778611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.097 [2024-11-26 19:31:09.778616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.097 [2024-11-26 19:31:09.778622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.097 [2024-11-26 19:31:09.790299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.097 [2024-11-26 19:31:09.790854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.790886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.790895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.791061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.791225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.791234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.791240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.791246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.802922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.803482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.803513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.803522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.803688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.803841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.803848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.803854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.803860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.815528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.816122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.816153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.816162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.816331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.816487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.816494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.816499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.816505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.828167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.828653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.828684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.828693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.828860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.829014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.829021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.829027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.829033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.840843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.841408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.841440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.841449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.841615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.841768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.841776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.841781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.841787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 6392.20 IOPS, 24.97 MiB/s [2024-11-26T18:31:09.963Z] [2024-11-26 19:31:09.853470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.854035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.854067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.854075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.854249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.854404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.854411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.854420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.854427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.866083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.866659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.866691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.866700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.866866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.867019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.867026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.867032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.867038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.878708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.879313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.879345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.879354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.879520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.879674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.879681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.879687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.879693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.891361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.891910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.891942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.891951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.892126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.892280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.892288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.892294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.892300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.903958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.098 [2024-11-26 19:31:09.904560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.098 [2024-11-26 19:31:09.904592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.098 [2024-11-26 19:31:09.904600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.098 [2024-11-26 19:31:09.904766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.098 [2024-11-26 19:31:09.904920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.098 [2024-11-26 19:31:09.904928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.098 [2024-11-26 19:31:09.904933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.098 [2024-11-26 19:31:09.904939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.098 [2024-11-26 19:31:09.916614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.099 [2024-11-26 19:31:09.917213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.099 [2024-11-26 19:31:09.917244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.099 [2024-11-26 19:31:09.917253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.099 [2024-11-26 19:31:09.917421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.099 [2024-11-26 19:31:09.917574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.099 [2024-11-26 19:31:09.917581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.099 [2024-11-26 19:31:09.917587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.099 [2024-11-26 19:31:09.917593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.099 [2024-11-26 19:31:09.929257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.099 [2024-11-26 19:31:09.929793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.099 [2024-11-26 19:31:09.929825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.099 [2024-11-26 19:31:09.929834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.099 [2024-11-26 19:31:09.929999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.099 [2024-11-26 19:31:09.930161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.099 [2024-11-26 19:31:09.930169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.099 [2024-11-26 19:31:09.930175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.099 [2024-11-26 19:31:09.930181] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.099 [2024-11-26 19:31:09.941985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.099 [2024-11-26 19:31:09.942569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.099 [2024-11-26 19:31:09.942605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.099 [2024-11-26 19:31:09.942613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.099 [2024-11-26 19:31:09.942779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.099 [2024-11-26 19:31:09.942933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.099 [2024-11-26 19:31:09.942940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.099 [2024-11-26 19:31:09.942946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.099 [2024-11-26 19:31:09.942953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.099 [2024-11-26 19:31:09.954627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.099 [2024-11-26 19:31:09.954986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.099 [2024-11-26 19:31:09.955003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.099 [2024-11-26 19:31:09.955009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.099 [2024-11-26 19:31:09.955165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.099 [2024-11-26 19:31:09.955316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.099 [2024-11-26 19:31:09.955323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.099 [2024-11-26 19:31:09.955328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.099 [2024-11-26 19:31:09.955334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.360 [2024-11-26 19:31:09.967293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.360 [2024-11-26 19:31:09.967881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.360 [2024-11-26 19:31:09.967913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.360 [2024-11-26 19:31:09.967922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.360 [2024-11-26 19:31:09.968088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.360 [2024-11-26 19:31:09.968250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.360 [2024-11-26 19:31:09.968258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.360 [2024-11-26 19:31:09.968264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.360 [2024-11-26 19:31:09.968270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.360 [2024-11-26 19:31:09.979919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.360 [2024-11-26 19:31:09.980499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.360 [2024-11-26 19:31:09.980531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.360 [2024-11-26 19:31:09.980540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.360 [2024-11-26 19:31:09.980709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.360 [2024-11-26 19:31:09.980863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.360 [2024-11-26 19:31:09.980870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.360 [2024-11-26 19:31:09.980876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:09.980882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:09.992549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:09.993005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:09.993021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:09.993027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:09.993182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:09.993333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:09.993340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:09.993346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:09.993351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.005705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:10.006044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:10.006059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:10.006065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:10.006226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:10.006376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:10.006383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:10.006388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:10.006394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.018361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:10.018848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:10.018862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:10.018868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:10.019018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:10.019175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:10.019182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:10.019191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:10.019197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.031015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:10.031457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:10.031471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:10.031477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:10.031626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:10.031777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:10.031783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:10.031788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:10.031794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.043625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:10.044069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:10.044083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:10.044089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:10.044244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:10.044403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:10.044410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:10.044416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:10.044421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.056243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.361 [2024-11-26 19:31:10.056783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.361 [2024-11-26 19:31:10.056815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.361 [2024-11-26 19:31:10.056824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.361 [2024-11-26 19:31:10.056990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.361 [2024-11-26 19:31:10.057152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.361 [2024-11-26 19:31:10.057161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.361 [2024-11-26 19:31:10.057167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.361 [2024-11-26 19:31:10.057174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.361 [2024-11-26 19:31:10.068846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.361 [2024-11-26 19:31:10.069310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.361 [2024-11-26 19:31:10.069327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.361 [2024-11-26 19:31:10.069333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.361 [2024-11-26 19:31:10.069484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.361 [2024-11-26 19:31:10.069634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.361 [2024-11-26 19:31:10.069641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.361 [2024-11-26 19:31:10.069646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.361 [2024-11-26 19:31:10.069652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.361 [2024-11-26 19:31:10.081462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.361 [2024-11-26 19:31:10.081938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.361 [2024-11-26 19:31:10.081953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.361 [2024-11-26 19:31:10.081959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.361 [2024-11-26 19:31:10.082116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.361 [2024-11-26 19:31:10.082268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.361 [2024-11-26 19:31:10.082275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.361 [2024-11-26 19:31:10.082280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.361 [2024-11-26 19:31:10.082286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.361 [2024-11-26 19:31:10.094134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.361 [2024-11-26 19:31:10.094716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.361 [2024-11-26 19:31:10.094747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.361 [2024-11-26 19:31:10.094757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.361 [2024-11-26 19:31:10.094922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.361 [2024-11-26 19:31:10.095076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.361 [2024-11-26 19:31:10.095084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.361 [2024-11-26 19:31:10.095090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.361 [2024-11-26 19:31:10.095097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.361 [2024-11-26 19:31:10.106775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.361 [2024-11-26 19:31:10.107330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.361 [2024-11-26 19:31:10.107366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.361 [2024-11-26 19:31:10.107374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.361 [2024-11-26 19:31:10.107540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.361 [2024-11-26 19:31:10.107693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.361 [2024-11-26 19:31:10.107701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.107706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.107713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.119392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.119982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.120014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.120023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.120198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.120352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.120359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.120365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.120371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.132043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.132641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.132673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.132681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.132848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.133001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.133008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.133014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.133021] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.144715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.145312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.145345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.145353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.145522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.145676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.145684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.145689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.145696] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.157370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.157941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.157973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.157981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.158155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.158310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.158317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.158323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.158329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.169998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.170493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.170508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.170514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.170665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.170815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.170822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.170827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.170832] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.182640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.183108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.183114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.183264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.183414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.183421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.183429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.183434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.195246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.195656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.195670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.195675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.195824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.195975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.195981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.195986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.195991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.207940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.208419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.208432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.208438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.208587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.208736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.208743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.208748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.208753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.362 [2024-11-26 19:31:10.220563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.362 [2024-11-26 19:31:10.220917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.362 [2024-11-26 19:31:10.220930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.362 [2024-11-26 19:31:10.220935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.362 [2024-11-26 19:31:10.221085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.362 [2024-11-26 19:31:10.221241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.362 [2024-11-26 19:31:10.221248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.362 [2024-11-26 19:31:10.221253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.362 [2024-11-26 19:31:10.221258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.233217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.233652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.233666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.233671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.233820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.233970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.233977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.233982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.233987] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.245831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.246175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.246190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.246196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.246347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.246497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.246503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.246509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.246514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.258468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.258873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.258887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.258892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.259042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.259197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.259204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.259209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.259215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.271174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.271609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.271626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.271632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.271782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.271931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.271938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.271943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.271948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.283757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.284300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.284331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.284340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.284506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.284660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.284667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.284673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.284679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.296348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.296951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.296983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.296992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.297165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.297319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.297326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.297332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.297338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.309003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.309593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.309626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.309634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.309803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.309957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.309964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.309970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.309977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.321646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.322221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.624 [2024-11-26 19:31:10.322253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.624 [2024-11-26 19:31:10.322262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.624 [2024-11-26 19:31:10.322429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.624 [2024-11-26 19:31:10.322583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.624 [2024-11-26 19:31:10.322591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.624 [2024-11-26 19:31:10.322596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.624 [2024-11-26 19:31:10.322602] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.624 [2024-11-26 19:31:10.334275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.624 [2024-11-26 19:31:10.334847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.625 [2024-11-26 19:31:10.334879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.625 [2024-11-26 19:31:10.334888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.625 [2024-11-26 19:31:10.335054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.625 [2024-11-26 19:31:10.335215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.625 [2024-11-26 19:31:10.335224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.625 [2024-11-26 19:31:10.335229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.625 [2024-11-26 19:31:10.335235] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.625 [2024-11-26 19:31:10.346910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.625 [2024-11-26 19:31:10.347522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.625 [2024-11-26 19:31:10.347554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.625 [2024-11-26 19:31:10.347563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.625 [2024-11-26 19:31:10.347729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.625 [2024-11-26 19:31:10.347882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.625 [2024-11-26 19:31:10.347889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.625 [2024-11-26 19:31:10.347899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.625 [2024-11-26 19:31:10.347905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.625 [2024-11-26 19:31:10.359576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:24:36.625 [2024-11-26 19:31:10.360060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:36.625 [2024-11-26 19:31:10.360091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420
00:24:36.625 [2024-11-26 19:31:10.360107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set
00:24:36.625 [2024-11-26 19:31:10.360274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor
00:24:36.625 [2024-11-26 19:31:10.360428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:24:36.625 [2024-11-26 19:31:10.360435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:24:36.625 [2024-11-26 19:31:10.360440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:24:36.625 [2024-11-26 19:31:10.360447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:24:36.625 [2024-11-26 19:31:10.372256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.372856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.372888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.372897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.373062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.373223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.373231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.373236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.373242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.384896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.385461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.385492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.385501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.385667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.385820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.385827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.385833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.385839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.397503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.398132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.398164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.398173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.398340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.398493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.398501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.398507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.398513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.410178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.410766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.410798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.410807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.410972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.411132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.411140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.411146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.411152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.422816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.423393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.423425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.423433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.423599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.423753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.423760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.423766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.423772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.435438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.435996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.436031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.436039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.436211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.436365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.436372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.625 [2024-11-26 19:31:10.436378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.625 [2024-11-26 19:31:10.436384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.625 [2024-11-26 19:31:10.448056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.625 [2024-11-26 19:31:10.448648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.625 [2024-11-26 19:31:10.448680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.625 [2024-11-26 19:31:10.448689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.625 [2024-11-26 19:31:10.448854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.625 [2024-11-26 19:31:10.449008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.625 [2024-11-26 19:31:10.449015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.626 [2024-11-26 19:31:10.449021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.626 [2024-11-26 19:31:10.449027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.626 [2024-11-26 19:31:10.460691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.626 [2024-11-26 19:31:10.461196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.626 [2024-11-26 19:31:10.461228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.626 [2024-11-26 19:31:10.461237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.626 [2024-11-26 19:31:10.461405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.626 [2024-11-26 19:31:10.461558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.626 [2024-11-26 19:31:10.461565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.626 [2024-11-26 19:31:10.461572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.626 [2024-11-26 19:31:10.461578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.626 [2024-11-26 19:31:10.473383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.626 [2024-11-26 19:31:10.473883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.626 [2024-11-26 19:31:10.473899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.626 [2024-11-26 19:31:10.473905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.626 [2024-11-26 19:31:10.474058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.626 [2024-11-26 19:31:10.474215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.626 [2024-11-26 19:31:10.474221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.626 [2024-11-26 19:31:10.474226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.626 [2024-11-26 19:31:10.474232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.626 [2024-11-26 19:31:10.486041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.626 [2024-11-26 19:31:10.486466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.626 [2024-11-26 19:31:10.486498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.626 [2024-11-26 19:31:10.486506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.626 [2024-11-26 19:31:10.486672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.910 [2024-11-26 19:31:10.486826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.910 [2024-11-26 19:31:10.486833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.910 [2024-11-26 19:31:10.486839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.910 [2024-11-26 19:31:10.486846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.910 [2024-11-26 19:31:10.498663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.910 [2024-11-26 19:31:10.499111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.910 [2024-11-26 19:31:10.499128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.910 [2024-11-26 19:31:10.499134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.910 [2024-11-26 19:31:10.499284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.910 [2024-11-26 19:31:10.499435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.499441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.499447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.499452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3895429 Killed "${NVMF_APP[@]}" "$@" 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3897130 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3897130 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3897130 ']' 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.911 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:36.911 [2024-11-26 19:31:10.511260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.511832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.511865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.511875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.512041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.512202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.512210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.512216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.512222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.523890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.524293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.524309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.524315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.524465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.524616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.524622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.524627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.524633] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.536583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.537037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.537050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.537056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.537212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.537362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.537373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.537379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.537384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:36.911 [2024-11-26 19:31:10.545575] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:24:36.911 [2024-11-26 19:31:10.545620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.911 [2024-11-26 19:31:10.549209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.549805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.549838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.549847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.550014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.550174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.550183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.550189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.550195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.561863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.562361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.562393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.562402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.562567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.562721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.562728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.562734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.562741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.574556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.575118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.575149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.575158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.575325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.575481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.575489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.575495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.575500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.587176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.911 [2024-11-26 19:31:10.587740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.911 [2024-11-26 19:31:10.587771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.911 [2024-11-26 19:31:10.587780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.911 [2024-11-26 19:31:10.587947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.911 [2024-11-26 19:31:10.588107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.911 [2024-11-26 19:31:10.588114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.911 [2024-11-26 19:31:10.588120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.911 [2024-11-26 19:31:10.588127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.911 [2024-11-26 19:31:10.599799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.600286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.600302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.600309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.600460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.600611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.600619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.600624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.600630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.612516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.613130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.613161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.613170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.613336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.613489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.613497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.613506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.613512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.617081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:36.912 [2024-11-26 19:31:10.625186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.625799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.625831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.625841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.626007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.626167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.626175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.626181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.626187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.637855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.638116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.638132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.638138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.638289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.638439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.638447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.638452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.638457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:36.912 [2024-11-26 19:31:10.646202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.912 [2024-11-26 19:31:10.646224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.912 [2024-11-26 19:31:10.646230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.912 [2024-11-26 19:31:10.646236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:36.912 [2024-11-26 19:31:10.646240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:36.912 [2024-11-26 19:31:10.647417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:36.912 [2024-11-26 19:31:10.647626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.912 [2024-11-26 19:31:10.647627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:36.912 [2024-11-26 19:31:10.650577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.651111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.651126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.651136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.651287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.651437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.651444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.651449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.651454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.663268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.663837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.663872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.663882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.664053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.664213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.664221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.664227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.664233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.675954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.676536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.676569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.676578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.676746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.676900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.676906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.676913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.676920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.688597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.912 [2024-11-26 19:31:10.689120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.912 [2024-11-26 19:31:10.689137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.912 [2024-11-26 19:31:10.689143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.912 [2024-11-26 19:31:10.689294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.912 [2024-11-26 19:31:10.689451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.912 [2024-11-26 19:31:10.689458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.912 [2024-11-26 19:31:10.689465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.912 [2024-11-26 19:31:10.689470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.912 [2024-11-26 19:31:10.701281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.913 [2024-11-26 19:31:10.701890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.913 [2024-11-26 19:31:10.701923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.913 [2024-11-26 19:31:10.701932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.913 [2024-11-26 19:31:10.702106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.913 [2024-11-26 19:31:10.702261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.913 [2024-11-26 19:31:10.702280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.913 [2024-11-26 19:31:10.702286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.913 [2024-11-26 19:31:10.702292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.913 [2024-11-26 19:31:10.713959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.913 [2024-11-26 19:31:10.714541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.913 [2024-11-26 19:31:10.714574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.913 [2024-11-26 19:31:10.714583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.913 [2024-11-26 19:31:10.714752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.913 [2024-11-26 19:31:10.714906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.913 [2024-11-26 19:31:10.714913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.913 [2024-11-26 19:31:10.714919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.913 [2024-11-26 19:31:10.714926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.913 [2024-11-26 19:31:10.726606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.913 [2024-11-26 19:31:10.727084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.913 [2024-11-26 19:31:10.727105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.913 [2024-11-26 19:31:10.727112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.913 [2024-11-26 19:31:10.727267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.913 [2024-11-26 19:31:10.727417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.913 [2024-11-26 19:31:10.727424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.913 [2024-11-26 19:31:10.727429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.913 [2024-11-26 19:31:10.727434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.913 [2024-11-26 19:31:10.739263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:36.913 [2024-11-26 19:31:10.739738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:36.913 [2024-11-26 19:31:10.739753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:36.913 [2024-11-26 19:31:10.739758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:36.913 [2024-11-26 19:31:10.739908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:36.913 [2024-11-26 19:31:10.740058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:36.913 [2024-11-26 19:31:10.740065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:36.913 [2024-11-26 19:31:10.740070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:36.913 [2024-11-26 19:31:10.740075] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.913 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.174 [2024-11-26 19:31:10.751901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.174 [2024-11-26 19:31:10.752387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.174 [2024-11-26 19:31:10.752402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:37.174 [2024-11-26 19:31:10.752407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:37.174 [2024-11-26 19:31:10.752557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:37.174 [2024-11-26 19:31:10.752707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:37.174 [2024-11-26 19:31:10.752714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:37.174 [2024-11-26 19:31:10.752720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:37.174 [2024-11-26 19:31:10.752725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:37.174 [2024-11-26 19:31:10.754890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.174 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.174 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.174 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.174 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.174 [2024-11-26 19:31:10.764531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.175 [2024-11-26 19:31:10.765083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.175 [2024-11-26 19:31:10.765121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:37.175 [2024-11-26 19:31:10.765131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:37.175 [2024-11-26 19:31:10.765300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:37.175 [2024-11-26 19:31:10.765454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:37.175 [2024-11-26 19:31:10.765461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:37.175 [2024-11-26 19:31:10.765467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:37.175 [2024-11-26 19:31:10.765473] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:37.175 [2024-11-26 19:31:10.777151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.175 [2024-11-26 19:31:10.777631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.175 [2024-11-26 19:31:10.777662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:37.175 [2024-11-26 19:31:10.777671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:37.175 [2024-11-26 19:31:10.777839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:37.175 [2024-11-26 19:31:10.777993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:37.175 [2024-11-26 19:31:10.778001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:37.175 [2024-11-26 19:31:10.778006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:37.175 [2024-11-26 19:31:10.778012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:37.175 Malloc0 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.175 [2024-11-26 19:31:10.789831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.175 [2024-11-26 19:31:10.790466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.175 [2024-11-26 19:31:10.790498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:37.175 [2024-11-26 19:31:10.790507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:37.175 [2024-11-26 19:31:10.790673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:37.175 [2024-11-26 19:31:10.790826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:37.175 [2024-11-26 19:31:10.790833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:24:37.175 [2024-11-26 19:31:10.790843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:37.175 [2024-11-26 19:31:10.790850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:37.175 [2024-11-26 19:31:10.802533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.175 [2024-11-26 19:31:10.803132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.175 [2024-11-26 19:31:10.803164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205e6a0 with addr=10.0.0.2, port=4420 00:24:37.175 [2024-11-26 19:31:10.803173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205e6a0 is same with the state(6) to be set 00:24:37.175 [2024-11-26 19:31:10.803342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205e6a0 (9): Bad file descriptor 00:24:37.175 [2024-11-26 19:31:10.803495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:24:37.175 [2024-11-26 19:31:10.803502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:24:37.175 [2024-11-26 19:31:10.803508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:24:37.175 [2024-11-26 19:31:10.803514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:24:37.175 [2024-11-26 19:31:10.804833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.175 19:31:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3895817 00:24:37.175 [2024-11-26 19:31:10.815195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:24:37.175 [2024-11-26 19:31:10.845931] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:24:38.114 5328.50 IOPS, 20.81 MiB/s [2024-11-26T18:31:13.032Z] 6414.86 IOPS, 25.06 MiB/s [2024-11-26T18:31:13.983Z] 7220.38 IOPS, 28.20 MiB/s [2024-11-26T18:31:14.923Z] 7851.22 IOPS, 30.67 MiB/s [2024-11-26T18:31:16.303Z] 8351.10 IOPS, 32.62 MiB/s [2024-11-26T18:31:16.872Z] 8770.00 IOPS, 34.26 MiB/s [2024-11-26T18:31:18.253Z] 9110.25 IOPS, 35.59 MiB/s [2024-11-26T18:31:19.193Z] 9398.23 IOPS, 36.71 MiB/s [2024-11-26T18:31:20.132Z] 9657.50 IOPS, 37.72 MiB/s 00:24:46.267 Latency(us) 00:24:46.267 [2024-11-26T18:31:20.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.267 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:46.267 Verification LBA range: start 0x0 length 0x4000 00:24:46.267 Nvme1n1 : 15.01 9872.56 38.56 11999.18 0.00 5834.31 737.28 16165.55 00:24:46.267 [2024-11-26T18:31:20.132Z] =================================================================================================================== 00:24:46.267 
[2024-11-26T18:31:20.132Z] Total : 9872.56 38.56 11999.18 0.00 5834.31 737.28 16165.55 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:46.267 19:31:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:46.267 rmmod nvme_tcp 00:24:46.267 rmmod nvme_fabrics 00:24:46.267 rmmod nvme_keyring 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3897130 ']' 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3897130 
00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3897130 ']' 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3897130 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3897130 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3897130' 00:24:46.267 killing process with pid 3897130 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3897130 00:24:46.267 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3897130 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.526 19:31:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.433 19:31:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.433 00:24:48.433 real 0m24.974s 00:24:48.433 user 0m59.891s 00:24:48.433 sys 0m5.698s 00:24:48.433 19:31:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:48.433 19:31:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:48.433 ************************************ 00:24:48.433 END TEST nvmf_bdevperf 00:24:48.433 ************************************ 00:24:48.433 19:31:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.433 19:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.694 ************************************ 00:24:48.694 START TEST nvmf_target_disconnect 00:24:48.694 ************************************ 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:48.694 * Looking for test storage... 
00:24:48.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:24:48.694 19:31:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.694 
--rc genhtml_branch_coverage=1 00:24:48.694 --rc genhtml_function_coverage=1 00:24:48.694 --rc genhtml_legend=1 00:24:48.694 --rc geninfo_all_blocks=1 00:24:48.694 --rc geninfo_unexecuted_blocks=1 00:24:48.694 00:24:48.694 ' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.694 --rc genhtml_branch_coverage=1 00:24:48.694 --rc genhtml_function_coverage=1 00:24:48.694 --rc genhtml_legend=1 00:24:48.694 --rc geninfo_all_blocks=1 00:24:48.694 --rc geninfo_unexecuted_blocks=1 00:24:48.694 00:24:48.694 ' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.694 --rc genhtml_branch_coverage=1 00:24:48.694 --rc genhtml_function_coverage=1 00:24:48.694 --rc genhtml_legend=1 00:24:48.694 --rc geninfo_all_blocks=1 00:24:48.694 --rc geninfo_unexecuted_blocks=1 00:24:48.694 00:24:48.694 ' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:48.694 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.694 --rc genhtml_branch_coverage=1 00:24:48.694 --rc genhtml_function_coverage=1 00:24:48.694 --rc genhtml_legend=1 00:24:48.694 --rc geninfo_all_blocks=1 00:24:48.694 --rc geninfo_unexecuted_blocks=1 00:24:48.694 00:24:48.694 ' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
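The `lt 1.15 2` check traced above comes from `cmp_versions` in scripts/common.sh: each version is split on `.-:` into an array and compared field by field. A minimal standalone sketch of the same idea follows; the helper name `ver_lt` is mine, not the harness's, and it assumes plain decimal version fields:

```shell
# Re-implementation sketch of the dotted-version comparison seen in the
# trace (cmp_versions 1.15 '<' 2): split on dots, compare numerically,
# treating missing fields in the shorter version as 0.
# ver_lt is a hypothetical name; fields are assumed to be plain decimals.
ver_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly less at this field: done
        (( a > b )) && return 1   # strictly greater: not less-than
    done
    return 1                      # all fields equal: not less-than
}
```

With lcov reporting a version below 2, the harness takes the `return 0` branch and selects the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling, as the exported `LCOV_OPTS` below shows.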
00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.694 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.695 19:31:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.695 19:31:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:24:53.972 
19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:53.972 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:53.972 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:53.972 Found net devices under 0000:31:00.0: cvl_0_0 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
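The `Found net devices under …` records above come from a sysfs lookup: for each matched PCI address, common.sh globs `/sys/bus/pci/devices/$pci/net/`* and then strips the directory prefix with `pci_net_devs=("${pci_net_devs[@]##*/}")`. A standalone sketch of that lookup, under the assumption of the standard sysfs layout (the helper name `pci_to_netdevs` is hypothetical):

```shell
# Sketch of the PCI-address-to-interface-name lookup performed in the
# trace: glob the device's net/ directory in sysfs, then keep only the
# basename, mirroring the harness's "${pci_net_devs[@]##*/}" expansion.
# pci_to_netdevs is a hypothetical helper name, not from common.sh.
pci_to_netdevs() {
    local pci=$1 entry
    for entry in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$entry" ] || continue   # glob did not match anything: skip
        echo "${entry##*/}"           # basename only, e.g. cvl_0_0
    done
}
```

On the build host in this log, `pci_to_netdevs 0000:31:00.0` would print `cvl_0_0`, matching the `Found net devices under 0000:31:00.0: cvl_0_0` record below.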
00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:53.972 Found net devices under 0000:31:00.1: cvl_0_1 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:53.972 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.973 19:31:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:24:53.973 00:24:53.973 --- 10.0.0.2 ping statistics --- 00:24:53.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.973 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:53.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:24:53.973 00:24:53.973 --- 10.0.0.1 ping statistics --- 00:24:53.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.973 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.973 19:31:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:53.973 ************************************ 00:24:53.973 START TEST nvmf_target_disconnect_tc1 00:24:53.973 ************************************ 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:53.973 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:54.234 [2024-11-26 19:31:27.888662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:54.234 [2024-11-26 19:31:27.888722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf78d00 with 
addr=10.0.0.2, port=4420 00:24:54.234 [2024-11-26 19:31:27.888750] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:54.234 [2024-11-26 19:31:27.888761] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:54.234 [2024-11-26 19:31:27.888769] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:24:54.234 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:54.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:54.234 Initializing NVMe Controllers 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.234 00:24:54.234 real 0m0.106s 00:24:54.234 user 0m0.053s 00:24:54.234 sys 0m0.052s 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:54.234 ************************************ 00:24:54.234 END TEST nvmf_target_disconnect_tc1 00:24:54.234 ************************************ 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:54.234 19:31:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:54.234 ************************************ 00:24:54.234 START TEST nvmf_target_disconnect_tc2 00:24:54.234 ************************************ 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3903671 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3903671 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3903671 ']' 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.234 19:31:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:54.234 19:31:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:54.234 [2024-11-26 19:31:27.986889] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:24:54.234 [2024-11-26 19:31:27.986933] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.234 [2024-11-26 19:31:28.071430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.494 [2024-11-26 19:31:28.108210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.494 [2024-11-26 19:31:28.108242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.494 [2024-11-26 19:31:28.108251] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.494 [2024-11-26 19:31:28.108257] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.494 [2024-11-26 19:31:28.108262] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
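The 10.0.0.x topology that `nvmftestinit` assembled earlier in the trace (target NIC `cvl_0_0` moved into namespace `cvl_0_0_ns_spdk` with 10.0.0.2, initiator NIC `cvl_0_1` left in the root namespace with 10.0.0.1) can be summarized as a dry-run sketch. The `run` wrapper only echoes each command, since applying them for real requires root and the physical `cvl_0_*` interfaces of this build host:

```shell
# Dry-run summary of the TCP test topology built by nvmftestinit in the
# trace above. run() only prints the command; replace the echo with sudo
# to actually apply (needs root and the cvl_0_* interfaces).
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                                  # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move target NIC
run ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                            # initiator -> target
```

The two ping records in the trace (initiator to 10.0.0.2, then `ip netns exec … ping` back to 10.0.0.1) verify this topology in both directions before `nvmf_tgt` is started inside the namespace.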
00:24:54.494 [2024-11-26 19:31:28.109799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:54.494 [2024-11-26 19:31:28.109952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:54.494 [2024-11-26 19:31:28.110108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.494 [2024-11-26 19:31:28.110123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 Malloc0 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 [2024-11-26 19:31:28.842970] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 [2024-11-26 19:31:28.871297] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3903864 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:55.063 19:31:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.630 19:31:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3903671 00:24:57.630 19:31:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write 
completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Write completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 Read completed with error (sct=0, sc=8) 00:24:57.630 starting I/O failed 00:24:57.630 [2024-11-26 19:31:30.899456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:57.630 [2024-11-26 19:31:30.899842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.630 [2024-11-26 19:31:30.899863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.630 qpair failed and we were unable to recover it. 00:24:57.630 [2024-11-26 19:31:30.900339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.630 [2024-11-26 19:31:30.900378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.630 qpair failed and we were unable to recover it. 
00:24:57.630 [2024-11-26 19:31:30.900692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.630 [2024-11-26 19:31:30.900708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.630 qpair failed and we were unable to recover it. 00:24:57.630 [2024-11-26 19:31:30.900988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.630 [2024-11-26 19:31:30.901000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.630 qpair failed and we were unable to recover it. 00:24:57.630 [2024-11-26 19:31:30.901390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.901428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.901708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.901723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.902047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.902059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.902398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.902410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.902680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.902692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.902951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.902964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.903281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.903293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.903619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.903631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.903898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.903910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.904244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.904257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.904564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.904577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.904866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.904877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.905124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.905136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.905381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.905393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.905715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.905727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.905993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.906005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.906221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.906234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.906414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.906425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.906628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.906642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.906966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.906978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.907192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.907205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.907460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.907472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.907772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.907784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.907985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.907997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.908364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.908376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.908650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.908662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.908949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.908961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.909286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.909299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.909592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.909604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.909876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.909890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.910081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.910094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.910405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.910417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.910704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.910716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.911036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.911048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.911116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.911127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.911437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.911449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.911768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.911780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.911941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.911954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 
00:24:57.631 [2024-11-26 19:31:30.912264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.631 [2024-11-26 19:31:30.912277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.631 qpair failed and we were unable to recover it. 00:24:57.631 [2024-11-26 19:31:30.912572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.912584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.912862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.912874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.912939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.912950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.913142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.913154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 
00:24:57.632 [2024-11-26 19:31:30.913501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.913514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.913811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.913822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.914126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.914140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.914388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.914400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.914695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.914707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 
00:24:57.632 [2024-11-26 19:31:30.914894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.914908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.915074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.915086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.915521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.915534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.915807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.915819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.916095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.916111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 
00:24:57.632 [2024-11-26 19:31:30.916455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.916467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.916718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.916730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.917057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.917069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.917354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.917366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 00:24:57.632 [2024-11-26 19:31:30.917651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.632 [2024-11-26 19:31:30.917663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.632 qpair failed and we were unable to recover it. 
00:24:57.632 [2024-11-26 19:31:30.917910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.632 [2024-11-26 19:31:30.917921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.632 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats verbatim (errno = 111, tqpair=0x165a490, addr=10.0.0.2, port=4420) for every retry timestamped 19:31:30.918214 through 19:31:30.951732 ...]
00:24:57.635 [2024-11-26 19:31:30.952023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.635 [2024-11-26 19:31:30.952035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.635 qpair failed and we were unable to recover it.
00:24:57.635 [2024-11-26 19:31:30.952338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.952349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.952635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.952646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.952958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.952970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.953241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.953253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.953540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.953550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 
00:24:57.635 [2024-11-26 19:31:30.953843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.953854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.954134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.954145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.954458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.954469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.954744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.954755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.955048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.955058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 
00:24:57.635 [2024-11-26 19:31:30.955350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.955362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.955633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.955644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.955953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.955965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.956264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.956275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 00:24:57.635 [2024-11-26 19:31:30.956492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.635 [2024-11-26 19:31:30.956503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.635 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.956821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.956832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.957131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.957143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.957452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.957463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.957744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.957756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.958034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.958046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.958337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.958349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.958666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.958995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.959007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.959307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.959319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.959628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.959640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.959809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.959822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.960088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.960110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.960381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.960393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.960691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.960702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.960973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.960984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.961277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.961556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.961566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.961833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.961845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.962133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.962144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.962453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.962464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.962736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.962747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.963021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.963032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.963345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.963357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.963623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.963634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.963940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.963952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.964119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.964131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.964453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.964464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.964747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.964760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.965035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.965046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.965336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.965347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 
00:24:57.636 [2024-11-26 19:31:30.965634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.965645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.965922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.965932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.966129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.966140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.966425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.636 [2024-11-26 19:31:30.966436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.636 qpair failed and we were unable to recover it. 00:24:57.636 [2024-11-26 19:31:30.966727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.966738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.967008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.967018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.967319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.967330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.967611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.967622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.967900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.967911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.968203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.968215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.968506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.968517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.968805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.968816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.969118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.969131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.969429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.969440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.969718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.969730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.970024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.970035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.970220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.970231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.970440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.970454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.970730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.970741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.971016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.971027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.971195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.971207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.971498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.971509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.971824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.971835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.972133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.972153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.972461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.972471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.972756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.972768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.972937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.972949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.973219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.973231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.973515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.973526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.973829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.973841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.637 [2024-11-26 19:31:30.974140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.974152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.974418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.974429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.974718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.974731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.975043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.975054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 00:24:57.637 [2024-11-26 19:31:30.975345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.637 [2024-11-26 19:31:30.975356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.637 qpair failed and we were unable to recover it. 
00:24:57.640 [2024-11-26 19:31:31.008278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-11-26 19:31:31.008289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.640 qpair failed and we were unable to recover it. 00:24:57.640 [2024-11-26 19:31:31.008556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-11-26 19:31:31.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.640 qpair failed and we were unable to recover it. 00:24:57.640 [2024-11-26 19:31:31.008878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.640 [2024-11-26 19:31:31.008889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.640 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.009165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.009177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.009373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.009385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.009707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.009719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.010021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.010033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.010353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.010364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.010666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.010677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.010970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.010982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.011291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.011302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.011582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.011593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.011857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.011868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.012061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.012071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.012390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.012401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.012713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.012724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.013005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.013016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.013308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.013319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.013491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.013502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.013806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.013817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.013998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.014009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.014315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.014329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.014602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.014613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.014774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.015117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.015128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.015460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.015471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.015800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.015812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.016121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.016132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.016544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.016555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.016858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.016870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.017066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.017078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.017392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.017404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.017739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.017750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.018025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.018037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.018365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.018377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.018661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.018673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.018971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.018983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.019384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.019396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.019558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.019570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.019791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.019804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 
00:24:57.641 [2024-11-26 19:31:31.020110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.641 [2024-11-26 19:31:31.020122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.641 qpair failed and we were unable to recover it. 00:24:57.641 [2024-11-26 19:31:31.020420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.020432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.020614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.020626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.020910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.020922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.021213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.021225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.021517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.021529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.021734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.021745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.022056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.022067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.022394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.022406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.022709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.022721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.023037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.023047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.023399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.023410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.023699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.023710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.023993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.024004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.024320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.024332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.024618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.024629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.024929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.024941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.025143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.025465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.025476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.025758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.025769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.025956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.025967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.026296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.026308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.026609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.026620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.026889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.026901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.027315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.027327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.027602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.027613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.027841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.027852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.028225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.028238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.028544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.028556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.028897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.028908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.029183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.029194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.029479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.029491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.029785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.029795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.030067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.030078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 00:24:57.642 [2024-11-26 19:31:31.030348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.642 [2024-11-26 19:31:31.030361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.642 qpair failed and we were unable to recover it. 
00:24:57.642 [2024-11-26 19:31:31.030646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.642 [2024-11-26 19:31:31.030657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.642 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x165a490, addr=10.0.0.2, port=4420 repeats 114 more times, through 00:24:57.646 [2024-11-26 19:31:31.063818] ...]
00:24:57.646 [2024-11-26 19:31:31.064031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.064043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.064380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.064392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.064683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.064695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.064900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.064912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.065108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.065120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.065441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.065452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.065766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.065778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.066056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.066067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.066457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.066469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.066786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.066799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.067109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.067122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.067403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.067414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.067715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.067727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.068022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.068033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.068457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.068469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.068738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.068749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.069058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.069069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.069236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.069248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.069448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.069460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.069770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.069781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.070092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.070107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.070330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.070343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.070619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.070630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.070937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.070948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.071118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.071129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.071435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.071446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.071696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.071707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.072012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.072023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.072117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.072129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.072314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.072325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.072527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.072538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.072730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.072742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.073040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.073051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.073384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.073396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.073700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.073712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 
00:24:57.646 [2024-11-26 19:31:31.073992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.074003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.074330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.646 [2024-11-26 19:31:31.074341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.646 qpair failed and we were unable to recover it. 00:24:57.646 [2024-11-26 19:31:31.074615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.074627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.074910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.074921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.075231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.075243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.075548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.075559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.075863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.075874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.076133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.076144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.076414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.076427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.076775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.076786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.077088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.077104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.077442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.077454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.077810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.077821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.078131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.078147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.078449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.078460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.078790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.078801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.079077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.079087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.079435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.079446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.079762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.079774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.080032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.080042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.080372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.080385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.080694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.080705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.081014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.081026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.081361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.081372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.081653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.081673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.081954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.081966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.082166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.082177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.082465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.082476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.082715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.082726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.083027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.083038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.083240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.083252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.083588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.083599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.083902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.083913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.084267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.084279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.084608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.084619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.084893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.084904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.085302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.085313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.085622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.085634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.085964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.085975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 00:24:57.647 [2024-11-26 19:31:31.086294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.086307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.647 qpair failed and we were unable to recover it. 
00:24:57.647 [2024-11-26 19:31:31.086628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.647 [2024-11-26 19:31:31.086639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.648 qpair failed and we were unable to recover it. 00:24:57.648 [2024-11-26 19:31:31.086962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.648 [2024-11-26 19:31:31.086973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.648 qpair failed and we were unable to recover it. 00:24:57.648 [2024-11-26 19:31:31.087182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.648 [2024-11-26 19:31:31.087194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.648 qpair failed and we were unable to recover it. 00:24:57.648 [2024-11-26 19:31:31.087473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.648 [2024-11-26 19:31:31.087485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.648 qpair failed and we were unable to recover it. 00:24:57.648 [2024-11-26 19:31:31.087629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.648 [2024-11-26 19:31:31.087639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.648 qpair failed and we were unable to recover it. 
00:24:57.648–00:24:57.651 [2024-11-26 19:31:31.087865 … 19:31:31.120044] (the same three-line sequence repeats for roughly 110 further connection attempts: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it.")
00:24:57.651 [2024-11-26 19:31:31.120343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.120363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.120665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.120676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.120972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.120982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.121309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.121321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.121646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.121658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.121941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.121952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.122133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.122144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.122318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.122330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.122634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.122645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.123012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.123024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.123333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.123345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.123696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.124009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.124021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.124290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.124302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.124590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.124601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.124912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.124922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.125091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.125106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.125436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.125447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.125729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.125739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.126040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.126050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.126307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.126318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.126550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.126564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.126881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.126892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.127062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.127073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.127384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.127395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.127737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.127749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.128017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.128028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.128291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.128302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.128585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.128597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.128938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.128950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.129277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.129288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.129573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.129584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.129875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.129886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.130291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.130302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 00:24:57.651 [2024-11-26 19:31:31.130615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.130627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.651 qpair failed and we were unable to recover it. 
00:24:57.651 [2024-11-26 19:31:31.130847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.651 [2024-11-26 19:31:31.130858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.131059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.131069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.131268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.131279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.131602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.131613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.131688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.131697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.131986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.131997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.132312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.132324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.132519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.132529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.132855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.132866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.132929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.132938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.133006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.133016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.133333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.133344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.133662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.133673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.133972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.133986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.134324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.134336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.134625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.134637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.134930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.134940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.135401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.135412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.135721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.135733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.135868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.135880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.136021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.136032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.136329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.136340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.136529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.136541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.136827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.136839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.136981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.136992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.137305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.137316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.137615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.137627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.137838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.137850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.138154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.138165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.138451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.138461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.138751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.138762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.139071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.139083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.139453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.139465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.139747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.139759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.139916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.139928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 
00:24:57.652 [2024-11-26 19:31:31.140140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.140151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.140425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.140435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.140764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.140775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.141113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.652 [2024-11-26 19:31:31.141125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.652 qpair failed and we were unable to recover it. 00:24:57.652 [2024-11-26 19:31:31.141434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-11-26 19:31:31.141444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.653 qpair failed and we were unable to recover it. 
00:24:57.653 [2024-11-26 19:31:31.141761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.653 [2024-11-26 19:31:31.141774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.653 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 against tqpair=0x165a490, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats ~40 more times through 19:31:31.153269; duplicate entries elided ...]
00:24:57.654 Read completed with error (sct=0, sc=8) 00:24:57.654 starting I/O failed
[... 32 Read/Write completions in total failed with error (sct=0, sc=8), each followed by "starting I/O failed"; duplicate entries elided ...]
00:24:57.654 [2024-11-26 19:31:31.154021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:57.654 [2024-11-26 19:31:31.154615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-11-26 19:31:31.154718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420 00:24:57.654 qpair failed and we were unable to recover it.
[... the same failure against tqpair=0x7f6024000b90 repeats at 19:31:31.155135; duplicate entry elided ...]
00:24:57.654 [2024-11-26 19:31:31.155430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-11-26 19:31:31.155462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420 00:24:57.654 qpair failed and we were unable to recover it.
00:24:57.654 [2024-11-26 19:31:31.155773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.654 [2024-11-26 19:31:31.155788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.654 qpair failed and we were unable to recover it.
[... the same three-line failure against tqpair=0x165a490 (connect() errno = 111, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats ~60 more times through 19:31:31.174175; duplicate entries elided ...]
00:24:57.656 [2024-11-26 19:31:31.174471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.174482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.174770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.174781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.175049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.175060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.175379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.175390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.175678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.175689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.175936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.175947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.176108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.176121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.176334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.176349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.176670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.176681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.176976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.176987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.177207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.177219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.177507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.177518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.177708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.177718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.178015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.178027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.178139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.178149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.178491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.178754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.178767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.179104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.179116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.179399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.179411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.179725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.179738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.179924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.179936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.180116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.180127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.180362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.180373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.180580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.180590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.180787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.180798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.181131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.181142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.181523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.181534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.181827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.181838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.181980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.181992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 00:24:57.656 [2024-11-26 19:31:31.182208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.656 [2024-11-26 19:31:31.182220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.656 qpair failed and we were unable to recover it. 
00:24:57.656 [2024-11-26 19:31:31.182559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.182570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.182858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.182870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.183043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.183054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.183354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.183366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.183672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.183682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.183985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.183997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.184117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.184128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.184488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.184500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.184759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.184770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.184956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.184968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.185334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.185346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.185704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.185715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.186023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.186035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.186430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.186442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.186737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.186748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.186948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.186959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.187132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.187144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.187460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.187471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.187757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.187770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.188060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.188072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.188367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.188379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.188654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.188664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.188836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.188848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.189033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.189045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.189402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.189414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.189698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.189709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.190000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.190013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.190420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.190432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.190730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.190742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.190924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.191243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.191254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.191507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.191518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.191681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.191692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.192000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.192011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.192345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.192356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 
00:24:57.657 [2024-11-26 19:31:31.192675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.192687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.192867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.192879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.193150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.193161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.657 [2024-11-26 19:31:31.193422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.657 [2024-11-26 19:31:31.193433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.657 qpair failed and we were unable to recover it. 00:24:57.658 [2024-11-26 19:31:31.193690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.193702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 
00:24:57.658 [2024-11-26 19:31:31.193975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.193985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 00:24:57.658 [2024-11-26 19:31:31.194314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.194326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 00:24:57.658 [2024-11-26 19:31:31.194651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.194663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 00:24:57.658 [2024-11-26 19:31:31.194833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.194844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 00:24:57.658 [2024-11-26 19:31:31.195010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.658 [2024-11-26 19:31:31.195021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.658 qpair failed and we were unable to recover it. 
00:24:57.658 [2024-11-26 19:31:31.195322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.658 [2024-11-26 19:31:31.195336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.658 qpair failed and we were unable to recover it.
00:24:57.661 [... the same three-line connect()/qpair error repeats verbatim for each reconnect attempt through 2024-11-26 19:31:31.227575 ...]
00:24:57.661 [2024-11-26 19:31:31.227889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.227902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.228129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.228140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.228344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.228354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.228617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.228628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.228934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.228945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 
00:24:57.661 [2024-11-26 19:31:31.229292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.229304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.229622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.229633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.229814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.229827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.230048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.230058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.230247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.230258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 
00:24:57.661 [2024-11-26 19:31:31.230583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.230594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.230947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.230959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.231284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.231295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.231494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.231504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.231689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.231700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 
00:24:57.661 [2024-11-26 19:31:31.231843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.231855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.232149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.232161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.232516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.232528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.232854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.232866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.233072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.233084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 
00:24:57.661 [2024-11-26 19:31:31.233311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.233323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.233616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.233627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.233932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.233944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.234254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.234265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.234559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.234570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 
00:24:57.661 [2024-11-26 19:31:31.234866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.234877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.235176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.235187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.235491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.235501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.661 [2024-11-26 19:31:31.235792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.661 [2024-11-26 19:31:31.235803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.661 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.236121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.236132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.236458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.236469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.236736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.236748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.237058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.237070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.237281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.237293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.237511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.237524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.237813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.237824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.238128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.238140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.238256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.238267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.238605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.238615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.238923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.238933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.239274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.239285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.239474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.239484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.239761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.239773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.239912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.239925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.240146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.240158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.240508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.240520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.240823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.240834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.241071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.241082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.241416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.241428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.241740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.241753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.242041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.242051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.242263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.242274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.242548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.242557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.242855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.242865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.243166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.243177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.243491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.243502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.243788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.243798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.244124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.244135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.244328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.244338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.244604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.244613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 
00:24:57.662 [2024-11-26 19:31:31.244916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.244926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.245246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.245257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.245463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.245474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.662 qpair failed and we were unable to recover it. 00:24:57.662 [2024-11-26 19:31:31.245791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.662 [2024-11-26 19:31:31.245802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.246095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.246108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 
00:24:57.663 [2024-11-26 19:31:31.246414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.246424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.246673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.246861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.246870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.247188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.247199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.247519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.247529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 
00:24:57.663 [2024-11-26 19:31:31.247797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.247807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.248089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.248104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.248338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.248347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.248644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.248655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 00:24:57.663 [2024-11-26 19:31:31.248946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.663 [2024-11-26 19:31:31.248955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.663 qpair failed and we were unable to recover it. 
00:24:57.663 [2024-11-26 19:31:31.249197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.663 [2024-11-26 19:31:31.249208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.663 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed with errno = 111, nvme_tcp sock connection error for tqpair=0x165a490 with addr=10.0.0.2, port=4420, qpair failed and unrecoverable) repeats continuously from 19:31:31.249 through 19:31:31.284 ...]
00:24:57.666 [2024-11-26 19:31:31.284465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.666 [2024-11-26 19:31:31.284475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.666 qpair failed and we were unable to recover it.
00:24:57.666 [2024-11-26 19:31:31.284773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.284784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.285076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.285085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.285385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.285395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.285566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.285575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.285852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.285863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 
00:24:57.666 [2024-11-26 19:31:31.286157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.286168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.286388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.286398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.286720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.286734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.287026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.287037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.287204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.287215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 
00:24:57.666 [2024-11-26 19:31:31.287394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.287404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.287729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.287738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.288054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.288064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.288356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.288367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.288684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.288695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 
00:24:57.666 [2024-11-26 19:31:31.288977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.288987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.289340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.289350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.289641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.289651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.289932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.289941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.290236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.290246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 
00:24:57.666 [2024-11-26 19:31:31.290677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.290687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.290993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.291003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.291321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.291331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.291620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.291629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.291914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.291924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 
00:24:57.666 [2024-11-26 19:31:31.292246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.292256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.292550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.292560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.292836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.666 [2024-11-26 19:31:31.292846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.666 qpair failed and we were unable to recover it. 00:24:57.666 [2024-11-26 19:31:31.293138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.293157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.293367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.293377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.293667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.293676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.293967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.293976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.294158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.294168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.294483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.294493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.294787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.294799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.295124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.295135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.295424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.295433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.295748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.295758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.296046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.296056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.296361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.296372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.296667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.296677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.296984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.296994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.297312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.297322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.297606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.297616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.297950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.297960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.298239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.298249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.298460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.298469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.298665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.298674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.298966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.298976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.299290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.299300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.299581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.299591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.299876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.299886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.300201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.300211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.300417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.300427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.300910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.300997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.301612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.302062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.302116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.302468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.302478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.302763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.302773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.303134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.303144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 
00:24:57.667 [2024-11-26 19:31:31.303490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.303500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.303795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.303807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.667 [2024-11-26 19:31:31.304131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.667 [2024-11-26 19:31:31.304141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.667 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.304514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.304524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.304867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.304877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.305216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.305226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.305385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.305395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.305675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.305685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.306055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.306065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.306355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.306366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.306726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.306736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.307033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.307042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.307262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.307272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.307600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.307610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.307936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.307946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.308146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.308157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.308418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.308427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.308761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.308771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.308947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.308956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.309278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.309288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.309611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.309620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.309911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.309920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.310261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.310272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.310518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.310528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.310855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.310865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.311199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.311210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.311476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.311485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.311800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.311810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.312108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.312118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.312417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.312427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.312635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.312645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.312855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.312864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.313171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.313181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.313487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.313497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.313687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.313696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.313996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.314005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.314290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.314300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.314603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.314612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.314909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.314919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.315202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.315212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 
00:24:57.668 [2024-11-26 19:31:31.315522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.668 [2024-11-26 19:31:31.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.668 qpair failed and we were unable to recover it. 00:24:57.668 [2024-11-26 19:31:31.315845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.315854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.316195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.316205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.316499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.316508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.316796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.316806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.317117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.317127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.317432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.317442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.317741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.317750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.318043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.318053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.318335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.318345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.318666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.318676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.319019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.319029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.319330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.319340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.319626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.319636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.319970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.319979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.320264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.320274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.320634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.320643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.320956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.320966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.321273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.321283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.321626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.321635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.321919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.321929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.322231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.322241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.322529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.322538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.322828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.322837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.323169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.323179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.323542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.323551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.323855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.323865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.324145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.324156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.324470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.324479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.324835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.324846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.325157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.325167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.325472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.325481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.325757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.325766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.326132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.326143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.326421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.326430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.326720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.326729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.327051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.327061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.327342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.327352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.327644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.327653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.327965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.327975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 
00:24:57.669 [2024-11-26 19:31:31.328267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.328277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.669 qpair failed and we were unable to recover it. 00:24:57.669 [2024-11-26 19:31:31.328572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.669 [2024-11-26 19:31:31.328581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.328884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.328893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.329257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.329267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.329444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.329455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.329790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.329800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.330096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.330115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.330431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.330441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.330728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.330737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.331029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.331039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.331344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.331354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.331657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.331667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.332059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.332070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.332352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.332363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.332676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.332685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.333030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.333040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.333333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.333345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.333688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.333698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.334009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.334019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.334303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.334313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.334621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.334630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.334818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.334828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.335120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.335130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.335443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.335453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.335739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.335748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.335888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.335898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.336188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.336198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.336531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.336540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.336731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.336740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 00:24:57.670 [2024-11-26 19:31:31.337049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.670 [2024-11-26 19:31:31.337058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.670 qpair failed and we were unable to recover it. 
00:24:57.670 [2024-11-26 19:31:31.337401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.337411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.337698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.337708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.338023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.338032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.338320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.338331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.338656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.338666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.338954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.338964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.339329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.339340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.339641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.339650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.339985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.339994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.340293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.340303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.340481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.340491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.340798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.340808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.341117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.341128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.670 qpair failed and we were unable to recover it.
00:24:57.670 [2024-11-26 19:31:31.341434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.670 [2024-11-26 19:31:31.341446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.341734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.341744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.341909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.341918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.342255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.342265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.342580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.342589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.342909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.342918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.343203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.343214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.343542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.343552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.343872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.343881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.344191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.344201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.344484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.344494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.344784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.344794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.345088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.345097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.345461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.345471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.345753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.345763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.346068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.346077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.346389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.346399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.346681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.346690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.346993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.347003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.347339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.347349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.347641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.347650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.347847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.347856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.348139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.348150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.348463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.348473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.348753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.348763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.349154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.349164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.349481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.349491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.349737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.350059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.350069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.350370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.350380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.350553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.350563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.350869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.350878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.351161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.351171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.351454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.351463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.351768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.351777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.352059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.352068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.352359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.352369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.352673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.352683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.671 qpair failed and we were unable to recover it.
00:24:57.671 [2024-11-26 19:31:31.353015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.671 [2024-11-26 19:31:31.353024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.353222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.353233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.353431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.353440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.353600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.353610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.353935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.353944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.354245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.354254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.354538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.354548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.354887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.354897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.355211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.355221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.355501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.355511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.355792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.355802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.356098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.356111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.356405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.356415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.356688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.356929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.356939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.357251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.357261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.357593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.357602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.357876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.357886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.358191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.358201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.358482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.358492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.358786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.358796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.359090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.359103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.359405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.359415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.359693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.359703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.359999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.360008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.360290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.360300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.360587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.360596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.360942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.360952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.361290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.361300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.361616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.361625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.361902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.361914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.362267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.362278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.362587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.362597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.362907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.362917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.363092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.363105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.363460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.363470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.363753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.363763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.364048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.364057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.364350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.364360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.364655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.364665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.364974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.364984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.365183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.365194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.365497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.365507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.365797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.365806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.366097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.672 [2024-11-26 19:31:31.366110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.672 qpair failed and we were unable to recover it.
00:24:57.672 [2024-11-26 19:31:31.366399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.366409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.366702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.366712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.366997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.367007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.367369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.367379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.367657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.367667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.367959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.367969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.368256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.368266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.368438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.368448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.368625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.368636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.368932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.368942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.369260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.369270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.369557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.369567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.369871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.369883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.370173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.370183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.370537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.370546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.370827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.370836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.371124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.371134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.371408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.371418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.371733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.673 [2024-11-26 19:31:31.371742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.673 qpair failed and we were unable to recover it.
00:24:57.673 [2024-11-26 19:31:31.372076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.372086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.372454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.372465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.372761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.372771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.373060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.373069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.373399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.373409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 
00:24:57.673 [2024-11-26 19:31:31.373748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.373757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.374053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.374063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.374406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.374417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.374700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.374709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.374999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.375009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 
00:24:57.673 [2024-11-26 19:31:31.375319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.375329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.375666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.375675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.375958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.375967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.376243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.376253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.376543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.376552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 
00:24:57.673 [2024-11-26 19:31:31.376824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.376834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.377128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.377139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.377448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.377458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.377752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.377762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.378045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.378055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 
00:24:57.673 [2024-11-26 19:31:31.378439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.378449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.378599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.378609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.378891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.378901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.379190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.379200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.379405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.379415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 
00:24:57.673 [2024-11-26 19:31:31.379698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.673 [2024-11-26 19:31:31.379707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.673 qpair failed and we were unable to recover it. 00:24:57.673 [2024-11-26 19:31:31.380001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.380011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.380299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.380309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.380594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.380603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.380885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.380895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.381181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.381191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.381482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.381492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.381778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.381787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.382068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.382078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.382379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.382390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.382676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.382686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.382974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.382984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.383281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.383291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.383603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.383612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.383901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.383911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.384223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.384233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.384534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.384544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.384835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.384844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.385126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.385136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.385435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.385444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.385763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.385773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.386094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.386106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.386404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.386414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.386760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.386769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.387093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.387108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.387407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.387417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.387710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.387719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.387893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.387905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.388237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.388534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.388543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.388841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.388851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.389195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.389205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.389394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.389404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.389684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.389694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.390025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.390034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.390364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.390374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.390655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.390667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.390983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.390992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.391332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.391342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.391617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.391627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.391791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.391801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.392094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.392107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.392401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.392411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.392599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.392609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.392904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.392914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 
00:24:57.674 [2024-11-26 19:31:31.393221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.674 [2024-11-26 19:31:31.393231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.674 qpair failed and we were unable to recover it. 00:24:57.674 [2024-11-26 19:31:31.393552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.675 [2024-11-26 19:31:31.393561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.675 qpair failed and we were unable to recover it. 00:24:57.675 [2024-11-26 19:31:31.393836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.675 [2024-11-26 19:31:31.393845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.675 qpair failed and we were unable to recover it. 00:24:57.675 [2024-11-26 19:31:31.394137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.675 [2024-11-26 19:31:31.394147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.675 qpair failed and we were unable to recover it. 00:24:57.675 [2024-11-26 19:31:31.394436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.675 [2024-11-26 19:31:31.394445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.675 qpair failed and we were unable to recover it. 
00:24:57.675 [2024-11-26 19:31:31.394759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.394769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.395063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.395073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.395391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.395402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.395699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.395709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.396015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.396025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.396292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.396302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.396585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.396594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.396886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.396896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.397263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.397273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.397585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.397594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.397893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.397902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.398214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.398224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.398573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.398583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.398912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.398924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.399248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.399258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.399578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.399588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.399882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.399893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.400176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.400186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.400576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.400586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.400865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.400875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.401220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.401231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.401397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.401406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.401690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.401699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.401982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.401992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.402280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.402290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.402579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.402589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.402873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.402883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.403086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.403097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.403396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.403406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.403699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.403709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.404033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.404043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.404264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.404275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.404608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.404619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.404941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.404951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.405250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.405261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.675 [2024-11-26 19:31:31.405467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.675 [2024-11-26 19:31:31.405478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.675 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.405785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.405795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.406092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.406107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.406405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.406415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.406750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.406760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.407048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.407060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.407400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.407411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.407744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.407754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.408085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.408095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.408454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.408464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.408758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.408767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.409062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.409073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.409366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.409377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.409672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.409682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.409741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.409751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.410049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.410058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.410356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.410367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.410555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.410565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.410858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.410867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.411278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.411289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.411624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.411633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.411924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.411934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.412120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.412132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.412490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.412500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.412787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.412797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.413119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.413130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.413434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.413444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.413728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.413739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.414049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.414059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.414353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.414363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.414626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.414636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.414945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.414955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.415243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.415253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.415583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.415593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.415813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.415823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.416201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.416212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.416521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.416530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.416811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.676 [2024-11-26 19:31:31.416821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.676 qpair failed and we were unable to recover it.
00:24:57.676 [2024-11-26 19:31:31.417104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.417115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.417459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.417468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.417753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.417763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.418049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.418060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.418374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.418385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.418690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.418700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.419002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.419012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.419321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.419331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.419649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.419659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.419949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.419959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.420338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.420348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.420711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.420720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.421003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.421013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.421328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.421339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.421643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.421653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.421853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.421863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.422147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.422158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.422468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.422477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.422761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.422771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.423066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.423076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.423373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.423383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.423709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.423719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.424010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.424019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.424309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.424320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.424501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.424512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.424734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.424744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.425023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.425033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.425343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.425352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.425650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.425660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.425940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.425949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.426140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.426150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.426438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.426447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.426764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.426773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.427060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.427070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.427352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.427362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.427671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.427682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.427965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.427975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.428273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.428283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.428594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.428603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.428896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.428906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.429261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.429271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.429556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.677 [2024-11-26 19:31:31.429566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.677 qpair failed and we were unable to recover it.
00:24:57.677 [2024-11-26 19:31:31.429840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.429850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.430121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.430132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.430447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.430457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.430770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.430780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.431067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.431077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 
00:24:57.677 [2024-11-26 19:31:31.431352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.431362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.431650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.431659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.431962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.431972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.677 qpair failed and we were unable to recover it. 00:24:57.677 [2024-11-26 19:31:31.432320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.677 [2024-11-26 19:31:31.432330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.432616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.432625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.432949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.432959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.433152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.433162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.433481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.433490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.433787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.433796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.434086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.434095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.434302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.434312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.434624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.434634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.434918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.434927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.435104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.435114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.435387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.435397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.435604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.435616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.435960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.435969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.436360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.436370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.436714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.436724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.437013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.437023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.437371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.437382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.437668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.437678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.437866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.437877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.438165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.438175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.438511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.438520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.438812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.438821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.439104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.439114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.439444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.439454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.439748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.439757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.440140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.440151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.440490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.440499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.440815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.440825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.441116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.441126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.441429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.441439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.441730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.441739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.442122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.442133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.442378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.442388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.442685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.442695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.442981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.442990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.443350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.443360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.443653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.443662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.443994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.444004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.444302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.444313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.444580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.444589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.444890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.444899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.445204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.445214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.445551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.445561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.445804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.445814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.446111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.446121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.446418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.446427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 
00:24:57.678 [2024-11-26 19:31:31.446710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.446719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.447034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.447044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.447330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.447340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.678 qpair failed and we were unable to recover it. 00:24:57.678 [2024-11-26 19:31:31.447667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.678 [2024-11-26 19:31:31.447676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.447968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.447978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 
00:24:57.679 [2024-11-26 19:31:31.448318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.448328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.448667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.448677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.448991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.449000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.449294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.449304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.449472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.449482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 
00:24:57.679 [2024-11-26 19:31:31.449808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.449817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.450110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.450120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.450422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.450433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.450717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.450726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.450901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.450911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 
00:24:57.679 [2024-11-26 19:31:31.451185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.451195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.451490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.451499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.451783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.451793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.452136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.452146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.452449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.452458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 
00:24:57.679 [2024-11-26 19:31:31.452745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.452754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.453060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.453071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.453372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.453382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.453653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.453663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 00:24:57.679 [2024-11-26 19:31:31.453992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.679 [2024-11-26 19:31:31.454002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.679 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.488092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.488105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.488404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.488414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.488773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.488783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.489157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.489168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.489481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.489490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.489698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.489708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.489992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.490002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.490286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.490297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.490472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.490482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.490758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.490768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.491093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.491106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.491394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.491404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.491697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.491707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.491998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.492008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.492305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.492315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.492636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.492646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.492804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.492814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.493122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.493132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.493370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.493379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.493565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.493574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.493850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.493862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.494166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.494176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.494476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.494486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.494778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.494787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.495095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.495108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.495404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.495413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.495694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.495703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.495999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.496009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.496327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.496337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.496618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.496628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.496918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.496928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.497136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.497146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.497426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.497435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.497822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.497832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 00:24:57.957 [2024-11-26 19:31:31.498127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.957 [2024-11-26 19:31:31.498137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.957 qpair failed and we were unable to recover it. 
00:24:57.957 [2024-11-26 19:31:31.498449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.498459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.498742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.498752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.499039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.499049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.499372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.499382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.499665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.499674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.499955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.499964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.500221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.500231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.500554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.500563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.500879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.500889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.501082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.501091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.501426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.501436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.501717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.501727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.502064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.502076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.502356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.502367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.502643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.502653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.502929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.502940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.503250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.503260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.503456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.503465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.503674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.503683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.503996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.504005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.504408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.504418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.504728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.504738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.505030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.505040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.505328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.505339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.505630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.505640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.505940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.505950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.506233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.506244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.506545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.506555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.506831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.506841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.507152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.507162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.507344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.507354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.507655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.507666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.507975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.507985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.508196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.508206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.508498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.508508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 
00:24:57.958 [2024-11-26 19:31:31.508836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.508845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.509123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.509133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.509427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.509436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.958 qpair failed and we were unable to recover it. 00:24:57.958 [2024-11-26 19:31:31.509717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.958 [2024-11-26 19:31:31.509728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.959 qpair failed and we were unable to recover it. 00:24:57.959 [2024-11-26 19:31:31.510046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.959 [2024-11-26 19:31:31.510056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.959 qpair failed and we were unable to recover it. 
00:24:57.959 [2024-11-26 19:31:31.510372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.959 [2024-11-26 19:31:31.510382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.959 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt from 19:31:31.510682 through 19:31:31.544594, all with errno = 111, tqpair=0x165a490, addr=10.0.0.2, port=4420 ...]
00:24:57.962 [2024-11-26 19:31:31.544594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.962 [2024-11-26 19:31:31.544604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.962 qpair failed and we were unable to recover it.
00:24:57.962 [2024-11-26 19:31:31.544896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.544906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.545277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.545288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.545569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.545579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.545857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.545867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.546150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.546161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.546446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.546456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.546771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.546781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.547123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.547134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.547302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.547312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.547663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.547672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.547864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.547874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.548215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.548225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.548544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.548554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.548841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.548851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.549147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.549157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.549462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.549472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.549764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.549774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.550092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.550108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.550382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.550391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.550705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.550715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.551004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.551014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.551322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.551332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.551616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.551625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.551951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.551961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.552277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.552287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.552574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.552583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.552793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.552803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.553084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.553093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.553363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.553374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.553677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.553686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 
00:24:57.962 [2024-11-26 19:31:31.553985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.553995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.554291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.962 [2024-11-26 19:31:31.554301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.962 qpair failed and we were unable to recover it. 00:24:57.962 [2024-11-26 19:31:31.554602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.554612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.554932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.554942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.555268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.555278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.555464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.555474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.555799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.555808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.556062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.556071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.556285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.556295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.556629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.556639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.556922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.556932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.557168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.557178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.557498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.557508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.557791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.557800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.558186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.558196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.558608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.558618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.558813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.558823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.559097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.559113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.559329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.559339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.559753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.559763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.559961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.559971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.560243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.560253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.560593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.560602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.560887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.560897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.561251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.561262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.561548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.561558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.561832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.561842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.562126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.562136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.562475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.562485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.562816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.562826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.563135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.563146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.563476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.563486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.563804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.563814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.564161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.564172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.564492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.564502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.564844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.564853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.565147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.565157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.565467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.565476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.565772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.565781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.963 [2024-11-26 19:31:31.566084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.566093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 
00:24:57.963 [2024-11-26 19:31:31.566467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.963 [2024-11-26 19:31:31.566477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.963 qpair failed and we were unable to recover it. 00:24:57.964 [2024-11-26 19:31:31.566682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.964 [2024-11-26 19:31:31.566691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.964 qpair failed and we were unable to recover it. 00:24:57.964 [2024-11-26 19:31:31.566976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.964 [2024-11-26 19:31:31.566985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.964 qpair failed and we were unable to recover it. 00:24:57.964 [2024-11-26 19:31:31.567280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.964 [2024-11-26 19:31:31.567290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.964 qpair failed and we were unable to recover it. 00:24:57.964 [2024-11-26 19:31:31.567648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.964 [2024-11-26 19:31:31.567659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.964 qpair failed and we were unable to recover it. 
00:24:57.964 [2024-11-26 19:31:31.568001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.964 [2024-11-26 19:31:31.568010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.964 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure repeated 113 more times, timestamps 19:31:31.568225 through 19:31:31.602166 ...]
00:24:57.967 [2024-11-26 19:31:31.602480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.967 [2024-11-26 19:31:31.602490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.967 qpair failed and we were unable to recover it.
00:24:57.967 [2024-11-26 19:31:31.602768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.602777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.603060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.603070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.603361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.603371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.603668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.603678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.603846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.603857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.604129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.604139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.604439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.604731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.604741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.605027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.605037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.605360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.605370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.605641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.605650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.605856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.605866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.606208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.606218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.606552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.606562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.606845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.606855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.607139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.607149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.607492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.607502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.607858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.607868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.608187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.608197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.608472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.608482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.608789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.608799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.609084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.609093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.609408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.609417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.609696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.609705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.609885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.609895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.610267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.610277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.610450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.610460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.610749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.610759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.611091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.611104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 00:24:57.967 [2024-11-26 19:31:31.611389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.611399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.967 qpair failed and we were unable to recover it. 
00:24:57.967 [2024-11-26 19:31:31.611717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.967 [2024-11-26 19:31:31.611727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.612058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.612068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.612376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.612387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.612736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.612746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.613046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.613056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.613353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.613363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.613661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.613671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.614019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.614029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.614334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.614344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.614507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.614516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.614785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.614794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.615031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.615041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.615328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.615338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.615697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.615706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.616007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.616016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.616399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.616409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.616579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.616589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.616881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.616890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.617179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.617190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.617458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.617467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.617751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.617760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.618049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.618059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.618354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.618364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.618681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.618691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.618987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.618996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.619270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.619280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.619570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.619579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.619951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.619961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.620142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.620152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.620434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.620444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.620781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.620791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.621120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.621131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.621502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.621512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.621814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.621824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.622164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.622174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 
00:24:57.968 [2024-11-26 19:31:31.622343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.622353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.622624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.968 [2024-11-26 19:31:31.622633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.968 qpair failed and we were unable to recover it. 00:24:57.968 [2024-11-26 19:31:31.622949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.622959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.623251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.623261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.623581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.623590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 
00:24:57.969 [2024-11-26 19:31:31.623873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.623883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.624150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.624160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.624497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.624507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.624821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.624831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 00:24:57.969 [2024-11-26 19:31:31.625112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.969 [2024-11-26 19:31:31.625124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.969 qpair failed and we were unable to recover it. 
00:24:57.969 [2024-11-26 19:31:31.625426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.625436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.625752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.625762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.626062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.626072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.626355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.626365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.626713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.626722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.627001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.627010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.627371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.627381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.627698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.627708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.627985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.627995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.628102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.628112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.628461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.628470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.628769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.628779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.629090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.629102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.629436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.629446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.629780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.629790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.630074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.630084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.630380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.630390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.630591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.630601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.630881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.630891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.631148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.631159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.631471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.631480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.631776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.631786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.632115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.632125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.632436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.632446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.632731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.632741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.633034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.633044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.633349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.633362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.633667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.633677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.969 [2024-11-26 19:31:31.633958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.969 [2024-11-26 19:31:31.633967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.969 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.634263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.634274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.634602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.634612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.634961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.634970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.635252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.635262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.635608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.635617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.635900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.635909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.636204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.636215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.636515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.636525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.636819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.636829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.637116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.637127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.637297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.637307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.637575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.637585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.637873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.637883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.638193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.638203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.638492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.638501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.638851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.638861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.639137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.639147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.639456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.639466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.639769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.639779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.640064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.640074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.640436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.640447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.640785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.640795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.641082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.641091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.641419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.641429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.641707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.641717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.641996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.642006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.642286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.642296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.642573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.642583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.642754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.642763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.643044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.643054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.643411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.643421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.643726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.643736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.644027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.644036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.644353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.644363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.644649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.644658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.644853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.644862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.645157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.645168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.645473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.970 [2024-11-26 19:31:31.645483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.970 qpair failed and we were unable to recover it.
00:24:57.970 [2024-11-26 19:31:31.645791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.645801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.646083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.646093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.646261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.646270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.646554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.646564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.646886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.646896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.647198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.647208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.647530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.647539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.647756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.647765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.648041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.648051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.648368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.648378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.648583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.648593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.648912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.648922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.649239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.649250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.649440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.649450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.649742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.649752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.650063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.650073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.650357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.650367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.650581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.650591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.650896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.650906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.651239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.651250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.651478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.651488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.651803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.651813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.652129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.652140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.652529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.652539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.652829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.652838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.653120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.653130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.653403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.653412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.653739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.653751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.654030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.654040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.654336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.654346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.654632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.654641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.654962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.654972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.655270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.655281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.655655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.655665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.655961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.655970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.656255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.656266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.656568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.656578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.656874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.656884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.971 [2024-11-26 19:31:31.657254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.971 [2024-11-26 19:31:31.657265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.971 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.657572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.657581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.657866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.657876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.658225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.658235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.658458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.658468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.658748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.658759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.659072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.659082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.659394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.659405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.659739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.659749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.660037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.972 [2024-11-26 19:31:31.660048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.972 qpair failed and we were unable to recover it.
00:24:57.972 [2024-11-26 19:31:31.660239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.660249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.660629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.660640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.660976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.660985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.661153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.661164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.661515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.661526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 
00:24:57.972 [2024-11-26 19:31:31.661703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.661713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.662000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.662013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.662219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.662230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.662533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.662544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.662848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.662858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 
00:24:57.972 [2024-11-26 19:31:31.663139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.663149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.663464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.663475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.663763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.663773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.663965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.663975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.664286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.664296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 
00:24:57.972 [2024-11-26 19:31:31.664498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.664508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.664849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.664858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.665140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.665151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.665446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.665455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.665737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.665747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 
00:24:57.972 [2024-11-26 19:31:31.666037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.666047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.666358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.666368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.666533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.666543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.666943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.666953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.667244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.667255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 
00:24:57.972 [2024-11-26 19:31:31.667461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.667471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.667746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.667756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.668079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.668089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.668291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.972 [2024-11-26 19:31:31.668302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.972 qpair failed and we were unable to recover it. 00:24:57.972 [2024-11-26 19:31:31.668592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.668602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.668913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.668923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.669230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.669240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.669539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.669549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.669860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.669872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.670150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.670161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.670477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.670487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.670769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.670778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.670952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.670963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.671289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.671300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.671607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.671617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.671912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.671922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.672239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.672250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.672581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.672591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.672884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.673167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.673178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.673469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.673479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.673778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.673788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.674072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.674083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.674258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.674269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.674573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.674583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.674915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.674925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.675223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.675234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.675528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.675537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.675827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.675837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.676119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.676130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.676440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.676450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.676782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.676791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.677072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.677082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.677407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.677418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.677715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.677725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 
00:24:57.973 [2024-11-26 19:31:31.678024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.678034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.678367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.678378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.678674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.973 [2024-11-26 19:31:31.678684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.973 qpair failed and we were unable to recover it. 00:24:57.973 [2024-11-26 19:31:31.678966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.678975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.679353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.679364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.679642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.679652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.679941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.679952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.680236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.680247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.680597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.680607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.680905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.680916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.681228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.681238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.681549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.681863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.681873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.682154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.682164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.682452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.682464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.682750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.682760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.683046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.683056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.683354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.683365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.683719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.683729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.684064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.684074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.684393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.684403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.684740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.684750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.684924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.684934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.685244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.685255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.685563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.685574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.685879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.685889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.686179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.686189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.686533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.686543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.686897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.686907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.687221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.687231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.687517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.687527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.687811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.687821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.688104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.688114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.688461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.688470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.688752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.688762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.689064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.689074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.689391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.689401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.689704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.689714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.690021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.690030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.690319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.690329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 
00:24:57.974 [2024-11-26 19:31:31.690626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.690635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.974 qpair failed and we were unable to recover it. 00:24:57.974 [2024-11-26 19:31:31.690968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.974 [2024-11-26 19:31:31.690980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.691272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.691282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.691564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.691573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.691920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.691929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.692133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.692143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.692332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.692341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.692692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.692702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.693000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.693009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.693200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.693210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.693538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.693547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.693838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.693848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.694022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.694032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.694315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.694325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.694672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.694682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.694969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.694979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.695274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.695285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.695585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.695595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.695896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.695905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.696232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.696242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.696520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.696530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.696920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.696929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.697280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.697290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.697625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.697635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.697935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.697945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.698292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.698303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.698582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.698591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.698909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.698919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.699255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.699267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.699560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.699570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.699739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.699750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.699942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.699951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.700290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.700300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.700581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.700591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.700874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.700883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.701168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.701178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.701475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.701486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.701774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.701784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.702097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.702111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 00:24:57.975 [2024-11-26 19:31:31.702397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.975 [2024-11-26 19:31:31.702407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.975 qpair failed and we were unable to recover it. 
00:24:57.975 [2024-11-26 19:31:31.702740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.702749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.703068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.703078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.703402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.703413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.703693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.703702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.703987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.703997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.704306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.704316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.704619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.704629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.704932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.704942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.705237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.705247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.705562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.705571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.705856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.705866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.706146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.706156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.706503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.706512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.706826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.706836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.707032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.707041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.707356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.707366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.707644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.707654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.707959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.707969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.708215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.708225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.708545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.708555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.708859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.708868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.709150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.709168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.709483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.709493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.709796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.709806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.710114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.710125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.710446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.710456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.710737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.710747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.711053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.711063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.711356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.711366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 00:24:57.976 [2024-11-26 19:31:31.711648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.976 [2024-11-26 19:31:31.711658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.976 qpair failed and we were unable to recover it. 
00:24:57.976 [2024-11-26 19:31:31.711969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.976 [2024-11-26 19:31:31.711978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.976 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 / ECONNREFUSED, tqpair=0x165a490, addr=10.0.0.2, port=4420, qpair unrecoverable) repeats continuously from 19:31:31.712278 through 19:31:31.746376; identical entries elided ...]
00:24:57.979 [2024-11-26 19:31:31.746792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.979 [2024-11-26 19:31:31.746802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.979 qpair failed and we were unable to recover it. 00:24:57.979 [2024-11-26 19:31:31.747106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.979 [2024-11-26 19:31:31.747116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.979 qpair failed and we were unable to recover it. 00:24:57.979 [2024-11-26 19:31:31.747404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.979 [2024-11-26 19:31:31.747413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.979 qpair failed and we were unable to recover it. 00:24:57.979 [2024-11-26 19:31:31.747698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.979 [2024-11-26 19:31:31.747709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.979 qpair failed and we were unable to recover it. 00:24:57.979 [2024-11-26 19:31:31.747988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.979 [2024-11-26 19:31:31.747998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.979 qpair failed and we were unable to recover it. 
00:24:57.979 [2024-11-26 19:31:31.748335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.748345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.748662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.748674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.748962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.748971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.749337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.749347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.749631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.749640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.749955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.749964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.750251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.750261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.750557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.750567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.750755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.750766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.751083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.751093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.751423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.751433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.751716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.751726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.752058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.752068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.752411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.752421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.752704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.752713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.753004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.753014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.753288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.753299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.753616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.753625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.753950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.753960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.754248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.754260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.754462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.754473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.754773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.754783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.755097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.755111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.755304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.755316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.755604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.755615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.755912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.755922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.756133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.756145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.756485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.756494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.756800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.756812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.757096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.757112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.757443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.757453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.757761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.757772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.758044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.758053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.758344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.758354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.758646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.758655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 
00:24:57.980 [2024-11-26 19:31:31.758951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.758961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.759183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.759194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.759495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.759505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.980 [2024-11-26 19:31:31.759793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.980 [2024-11-26 19:31:31.759803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.980 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.760098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.760112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.760449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.760458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.760746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.760756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.761095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.761108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.761409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.761418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.761703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.761712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.762001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.762011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.762370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.762381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.762564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.762574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.762874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.762884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.763185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.763195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.763514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.763524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.763818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.763828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.764139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.764149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.764465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.764475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.764659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.764668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.764980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.764992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.765292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.765302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.765597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.765607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.765917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.765926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.766222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.766232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.766513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.766523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.766845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.766855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.767139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.767150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.767364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.767373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.767675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.767685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.767965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.767974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.768269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.768279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.768578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.768587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.768761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.768771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 00:24:57.981 [2024-11-26 19:31:31.769055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.981 [2024-11-26 19:31:31.769065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:57.981 qpair failed and we were unable to recover it. 
00:24:57.981 [2024-11-26 19:31:31.769378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.769388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.769708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.769717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.770008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.770017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.770305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.770315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.770597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.770606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.770927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.770937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.771243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.981 [2024-11-26 19:31:31.771253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.981 qpair failed and we were unable to recover it.
00:24:57.981 [2024-11-26 19:31:31.771542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.771552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.771904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.771914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.772200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.772211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.772387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.772397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.772735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.772744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.773057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.773066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.773411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.773421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.773703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.773712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.773905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.773916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.774188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.774198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.774550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.774559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.774846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.774855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.775132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.775143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.775449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.775458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.775737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.775746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.776033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.776043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.776351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.776361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.776718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.776727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.777000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.777009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.777219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.777229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.777552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.777562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.777894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.777904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.778178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.778189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.778503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.778670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.778681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.779001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.779011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.779325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.779335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.779648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.779657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.779991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.780000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.780353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.780363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.780668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.780678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.780971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.780981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.781267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.781278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.781459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.781469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.982 qpair failed and we were unable to recover it.
00:24:57.982 [2024-11-26 19:31:31.781799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.982 [2024-11-26 19:31:31.781809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.782109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.782119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.782416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.782425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.782721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.782730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.783028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.783037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.783351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.783361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.783727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.783736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.784032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.784042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.784358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.784368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.784560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.784570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.784871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.784882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.785193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.785203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.785530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.785541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.785822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.785832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.786133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.786143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.786497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.786506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.786793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.786802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.787082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.787092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.787396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.787406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.787626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.787636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.787918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.787928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.788220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.788230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.788520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.788529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.788736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.788746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.789059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.789068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.789379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.789389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.789676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.789686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.789995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.790005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.790228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.790238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.790534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.790544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.790740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.790752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.791044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.791053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.791363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.791373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.791656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.791665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.791955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.791965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.792254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.792264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.792433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.792444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.792790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.792800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.793089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.983 [2024-11-26 19:31:31.793103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.983 qpair failed and we were unable to recover it.
00:24:57.983 [2024-11-26 19:31:31.793385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.793397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.793784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.793793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.794084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.794093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.794413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.794424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.794735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.794744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.794951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.794960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.795276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.795287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.795656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.795666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.795967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.795976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.796258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.796268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.796572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.796582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.796879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.796889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.797170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.797180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.797388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.797398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.797675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.797685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.797999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.798009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.798329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.798340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.798676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.798686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.798979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.798988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.799332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.799342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.799664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.799674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.799961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.799971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.800279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.800289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.800574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.800583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.800775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.800784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.801128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.801138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.801341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.801351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.801680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.801691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.802044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.802053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.802261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.802272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.802565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.802576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.802898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.802908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.803197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.803207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.803511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.803520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:57.984 [2024-11-26 19:31:31.803850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:57.984 [2024-11-26 19:31:31.803860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:57.984 qpair failed and we were unable to recover it.
00:24:58.259 [2024-11-26 19:31:31.804198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.804209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.804519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.804530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.804822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.804832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.805133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.805143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.805477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.805487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 
00:24:58.259 [2024-11-26 19:31:31.805801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.805811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.806097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.806112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.806453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.806462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.806743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.806752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.807046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.807056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 
00:24:58.259 [2024-11-26 19:31:31.807403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.807413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.807694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.807703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.808001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.808011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.808318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.808328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.808617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.808627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 
00:24:58.259 [2024-11-26 19:31:31.808930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.808940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.809269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.809279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.809581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.809591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.809875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.809884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.810058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.810068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 
00:24:58.259 [2024-11-26 19:31:31.810372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.810382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.810721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.810731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.811028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.811037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.811211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.811222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.811525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.811534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 
00:24:58.259 [2024-11-26 19:31:31.811847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.259 [2024-11-26 19:31:31.811857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.259 qpair failed and we were unable to recover it. 00:24:58.259 [2024-11-26 19:31:31.812165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.812175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.812502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.812512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.812801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.812811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.813102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.813112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.813446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.813456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.813757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.813767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.814048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.814058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.814253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.814265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.814586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.814595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.814877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.814886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.815052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.815062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.815394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.815404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.815684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.815694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.816055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.816064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.816367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.816378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.816669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.816679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.816998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.817007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.817324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.817334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.817619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.817629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.817918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.817928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.818238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.818248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.818579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.818589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.818875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.819166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.819176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.819468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.819478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.819764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.819773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.819974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.820293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.820303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.820593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.820602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.820965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.820974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.821176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.821186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.821394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.821403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.821679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.821689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.821945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.821954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 
00:24:58.260 [2024-11-26 19:31:31.822239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.822251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.822454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.822463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.822769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.822779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.823075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.260 [2024-11-26 19:31:31.823085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.260 qpair failed and we were unable to recover it. 00:24:58.260 [2024-11-26 19:31:31.823258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.823269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.823566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.823576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.823860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.824172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.824182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.824480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.824490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.824771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.824781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.825107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.825117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.825473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.825482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.825803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.825813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.826099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.826112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.826426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.826436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.826735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.826745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.827030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.827039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.827230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.827240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.827576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.827586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.827908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.827918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.828106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.828116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.828415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.828425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.828708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.828718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.829028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.829038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.829327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.829337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.829686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.829696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.829975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.829985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.830237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.830249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.830544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.830554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.830775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.830785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.831096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.831110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.831392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.831402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.831689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.831698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.831971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.831981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.832228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.832238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.832444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.832454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.832777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.832787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.833120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.833130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.833465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.833475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.833782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.833791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 
00:24:58.261 [2024-11-26 19:31:31.834121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.834131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.834438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.261 [2024-11-26 19:31:31.834447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.261 qpair failed and we were unable to recover it. 00:24:58.261 [2024-11-26 19:31:31.834758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.834767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.835055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.835065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.835370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.835380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.835589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.835600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.835922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.835932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.836119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.836129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.836395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.836406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.836737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.836746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.837040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.837050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.837375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.837385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.837707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.837717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.838014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.838024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.838366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.838376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.838651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.838663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.838954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.838965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.839311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.839321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.839593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.839602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.839893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.839903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.840180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.840191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.840364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.840375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.840611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.840621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.840905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.840914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.841247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.841257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.841594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.841604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.841887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.841897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.842190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.842201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.842548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.842558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.842883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.842892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.843178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.843189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.843523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.843532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.843841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.843851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.844177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.844188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.844555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.844565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.262 [2024-11-26 19:31:31.844876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.844885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.845422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.845442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.845699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.845710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.846053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.846063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 00:24:58.262 [2024-11-26 19:31:31.846410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.262 [2024-11-26 19:31:31.846421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.262 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.846709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.846721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.847009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.847019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.847334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.847346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.847678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.847688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.848033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.848044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.848206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.848218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.848573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.848582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.848868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.848879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.849179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.849190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.849549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.849559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.849845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.849856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.850162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.850174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.850479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.850492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.850839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.850850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.851114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.851124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.851453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.851466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.851749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.851758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.852034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.852044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.852380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.852390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.852672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.852681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.852969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.852979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.853287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.853297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.853611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.853621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.853920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.853930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.854131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.854143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.854437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.854446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.854656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.854665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.854944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.854954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.855330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.855340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.855626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.855636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 
00:24:58.263 [2024-11-26 19:31:31.855965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.855975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.856265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.856275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.263 [2024-11-26 19:31:31.856617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.263 [2024-11-26 19:31:31.856627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.263 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.856908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.856917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.857244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.857255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 
00:24:58.264 [2024-11-26 19:31:31.857574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.857583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.857866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.857875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.858238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.858248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.858579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.858589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 00:24:58.264 [2024-11-26 19:31:31.858902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.264 [2024-11-26 19:31:31.858911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:58.264 qpair failed and we were unable to recover it. 
00:24:58.264 [2024-11-26 19:31:31.859200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.859210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.859484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.859494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.859770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.859782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.860055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.860065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.860406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.860416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.860702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.860711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.860998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.861008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.861206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.861218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.861530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.861540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.861831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.861840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.862163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.862174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.862475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.862484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.862561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.862571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.862910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.862920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.863218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.863228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.863511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.863521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.863814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.863824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.864139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.864150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.864431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.864440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.864706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.864716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 [2024-11-26 19:31:31.864827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.264 [2024-11-26 19:31:31.864837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420
00:24:58.264 qpair failed and we were unable to recover it.
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Read completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.264 Write completed with error (sct=0, sc=8)
00:24:58.264 starting I/O failed
00:24:58.265 Read completed with error (sct=0, sc=8)
00:24:58.265 starting I/O failed
00:24:58.265 [2024-11-26 19:31:31.865047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.265 [2024-11-26 19:31:31.865463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.865498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.865669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.865681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.865965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.865974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.866350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.866358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.866515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.866522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.866902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.866908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.867230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.867238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.867568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.867575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.867891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.867898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.868204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.868211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.868511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.868519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.868878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.868885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.869192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.869199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.869524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.869531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.869736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.869743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.870045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.870053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.870418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.870425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.870715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.870722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.871043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.871050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.871268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.871275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.871585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.871592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.871784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.871791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.872151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.872158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.872484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.872491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.872777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.872783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.873117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.873124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.873334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.873341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.873628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.873635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.873926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.873933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.874220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.874227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.874544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.874551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.874844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.874851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.875178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.875185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.875325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.875331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.875594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.875601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.875902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.875909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.265 qpair failed and we were unable to recover it.
00:24:58.265 [2024-11-26 19:31:31.876241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.265 [2024-11-26 19:31:31.876249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.876549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.876556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.876856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.876863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.877169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.877176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.877501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.877508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.877835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.877844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.878131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.878138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.878437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.878444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.878813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.878820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.879001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.879008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.879299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.879306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.879634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.879641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.879958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.879965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.880255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.880262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.880629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.880636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.880923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.880929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.881234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.881241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.881595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.881602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.881891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.881898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.882230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.882237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.882577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.882584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.882882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.882889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.883201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.883208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.883523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.883530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.883817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.883824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.884155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.884161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.884468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.884475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.884681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.884689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.884971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.884978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.885292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.266 [2024-11-26 19:31:31.885299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.266 qpair failed and we were unable to recover it.
00:24:58.266 [2024-11-26 19:31:31.885600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.885607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.885937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.885943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.886238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.886245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.886541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.886548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.886829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.886836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 
00:24:58.266 [2024-11-26 19:31:31.887146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.887153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.887439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.887446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.887736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.266 [2024-11-26 19:31:31.887743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.266 qpair failed and we were unable to recover it. 00:24:58.266 [2024-11-26 19:31:31.888123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.888130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.888511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.888518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.888861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.888868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.889159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.889168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.889468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.889475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.889758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.889765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.890058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.890064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.890370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.890377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.890690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.890697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.890982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.890989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.891276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.891284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.891580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.891587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.891880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.891887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.892177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.892184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.892541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.892548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.892850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.892857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.893028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.893036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.893326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.893334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.893588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.893594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.893773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.893779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.893975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.893982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.894268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.894275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.894651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.894658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.894853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.894860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.895148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.895155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.895464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.895471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.895684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.895691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.896022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.896028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.896384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.896391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.896687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.896694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.896987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.896993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.897297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.897304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.897608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.897615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.897907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.897914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.898257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.898268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.898555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.898562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.898851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.898858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 
00:24:58.267 [2024-11-26 19:31:31.899040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.267 [2024-11-26 19:31:31.899047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.267 qpair failed and we were unable to recover it. 00:24:58.267 [2024-11-26 19:31:31.899327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.899334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.899663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.899670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.899955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.899961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.900277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.900284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.900570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.900577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.900746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.900754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.901088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.901095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.901396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.901403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.901576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.901583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.901910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.901918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.902233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.902240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.902605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.902612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.902967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.902975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.903317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.903325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.903632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.903639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.903944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.903952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.904287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.904295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.904630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.904637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.904951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.904958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.905267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.905274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.905609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.905616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.905921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.905929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.906229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.906236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.906464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.906472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.906789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.906796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.907006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.907014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.907472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.907480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.907821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.907829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.908137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.908145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.908423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.908430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.908741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.908749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.909084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.909092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.909398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.909405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 00:24:58.268 [2024-11-26 19:31:31.909684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.268 [2024-11-26 19:31:31.909692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.268 qpair failed and we were unable to recover it. 
00:24:58.268 [2024-11-26 19:31:31.910004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.910011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.910373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.910381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.910671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.910680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.910970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.910977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.911275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.911283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 
00:24:58.269 [2024-11-26 19:31:31.911575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.911582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.911868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.911876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.912079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.912086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.912248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.912256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 00:24:58.269 [2024-11-26 19:31:31.912543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.269 [2024-11-26 19:31:31.912550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.269 qpair failed and we were unable to recover it. 
00:24:58.269-00:24:58.272 [2024-11-26 19:31:31.912844 through 19:31:31.945736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (identical error triplet repeated, timestamps only differing)
00:24:58.272 [2024-11-26 19:31:31.946046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.946054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.946435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.946442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.946745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.946753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.947074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.947082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.947460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.947467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 
00:24:58.272 [2024-11-26 19:31:31.947761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.947769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.948055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.948062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.948363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.948370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.948655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.948661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.949010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.949017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 
00:24:58.272 [2024-11-26 19:31:31.949314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.949322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.949657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.272 [2024-11-26 19:31:31.949663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.272 qpair failed and we were unable to recover it. 00:24:58.272 [2024-11-26 19:31:31.949949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.949956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.950263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.950271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.950648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.950655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.950994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.951001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.951319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.951326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.951527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.951533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.951856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.951863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.952161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.952168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.952462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.952468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.952785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.952791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.953108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.953115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.953387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.953394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.953713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.953720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.954022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.954028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.954233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.954240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.954508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.954515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.954753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.954760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.955080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.955087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.955405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.955412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.955711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.955718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.956013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.956020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.956325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.956332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.956647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.956654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.956954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.956961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.957253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.957260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.957564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.957571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.957740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.957747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.958029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.958036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.958451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.958458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.958756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.958762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.959050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.959060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.959440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.959447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 00:24:58.273 [2024-11-26 19:31:31.959766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.273 [2024-11-26 19:31:31.959773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.273 qpair failed and we were unable to recover it. 
00:24:58.273 [2024-11-26 19:31:31.960117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.960123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.960457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.960464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.960776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.960782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.960934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.960940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.961175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.961183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.961395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.961401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.961704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.961710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.962058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.962065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.962363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.962370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.962654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.962660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.962940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.962946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.963329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.963336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.963639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.963646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.963965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.963971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.964258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.964265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.964559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.964565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.964867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.964874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.965198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.965205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.965551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.965558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.965881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.965887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.966178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.966185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.966482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.966489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.966789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.966796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.967091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.967098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.967432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.967439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.967733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.967739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.968027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.968033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.968228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.968236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.968492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.968499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 00:24:58.274 [2024-11-26 19:31:31.968829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.274 [2024-11-26 19:31:31.968835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.274 qpair failed and we were unable to recover it. 
00:24:58.274 [2024-11-26 19:31:31.969125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.274 [2024-11-26 19:31:31.969132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420
00:24:58.274 qpair failed and we were unable to recover it.
00:24:58.277 [2024-11-26 19:31:32.003788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.003795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 00:24:58.277 [2024-11-26 19:31:32.004119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.004126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 00:24:58.277 [2024-11-26 19:31:32.004421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.004428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 00:24:58.277 [2024-11-26 19:31:32.004722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.004729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 00:24:58.277 [2024-11-26 19:31:32.004902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.004909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 
00:24:58.277 [2024-11-26 19:31:32.005195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.277 [2024-11-26 19:31:32.005203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.277 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.005546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.005553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.005852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.005859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.006167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.006174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.006456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.006462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.006757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.006763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.007053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.007059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.007400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.007409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.007691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.007698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.008023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.008029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.008328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.008335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.008644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.008651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.008958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.008965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.009263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.009270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.009598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.009605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.009922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.009928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.010213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.010220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.010512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.010519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.010736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.010743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.010919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.010926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.011252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.011260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.011527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.011533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.011749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.011756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.012009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.012016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.012335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.012342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.012662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.012669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.012849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.012856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.013189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.013197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.013503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.013510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.013800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.013807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 
00:24:58.278 [2024-11-26 19:31:32.014113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.014120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.014472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.278 [2024-11-26 19:31:32.014478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.278 qpair failed and we were unable to recover it. 00:24:58.278 [2024-11-26 19:31:32.014768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.014774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.015096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.015109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.015268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.015275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.015554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.015560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.015855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.015862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.016147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.016155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.016347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.016354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.016533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.016539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.016833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.016839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.017136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.017143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.017471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.017478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.017771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.017778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.018089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.018096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.018474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.018481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.018804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.018811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.019111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.019121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.019416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.019423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.019747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.019754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.020040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.020046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.020389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.020396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.020691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.020698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.021019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.021026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.021294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.021301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.021492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.021500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.021700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.021707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.022012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.022018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.022213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.022221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.022510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.022516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.022829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.022836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.023132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.023139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.023452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.023458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.023743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.023749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.279 [2024-11-26 19:31:32.024040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.024047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 
00:24:58.279 [2024-11-26 19:31:32.024330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.279 [2024-11-26 19:31:32.024336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.279 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.024632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.024639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.024946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.024952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.025251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.025259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.025557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.025565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 
00:24:58.280 [2024-11-26 19:31:32.025873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.025880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.026212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.026219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.026619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.026625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.026926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.026933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 00:24:58.280 [2024-11-26 19:31:32.027109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.280 [2024-11-26 19:31:32.027117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.280 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.060208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.060215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.060526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.060533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.060837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.060843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.061161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.061168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.061468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.061475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.061777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.061784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.061929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.061935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.062332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.062339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.062648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.062655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.062954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.062962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.063300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.063607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.063614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.063950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.063957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.064274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.064281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.064661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.064668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.064954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.064960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.065246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.065254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.065597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.065605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.065915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.065922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.066230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.066237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.066563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.066570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.066912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.066919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.067221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.067228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.067525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.067532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.067740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.067746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 
00:24:58.283 [2024-11-26 19:31:32.067930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.067936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.068180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.068188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.283 qpair failed and we were unable to recover it. 00:24:58.283 [2024-11-26 19:31:32.068377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.283 [2024-11-26 19:31:32.068383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.068686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.068693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.068994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.069001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.069359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.069366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.069651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.069658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.070019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.070026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.070235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.070242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.070563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.070570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.070863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.070869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.071158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.071165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.071480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.071487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.071774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.071781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.072063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.072070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.072375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.072382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.072676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.072682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.072978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.072985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.073185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.073193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.073515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.073522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.073809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.073815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.074108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.074115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.074395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.074401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.074703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.074710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.074992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.075000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.075345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.075352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.075639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.075646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.075809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.075816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.076108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.076115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.076394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.076401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.076718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.076724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.077007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.077014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.077368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.077375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.077659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.077666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.077836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.077843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.078219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.078225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.078424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.078431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.078766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.078773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.079061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.079068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.284 [2024-11-26 19:31:32.079430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.079437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 
00:24:58.284 [2024-11-26 19:31:32.079717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.284 [2024-11-26 19:31:32.079723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.284 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.080011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.080018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.080371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.080378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.080665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.080672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.080958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.080965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 
00:24:58.285 [2024-11-26 19:31:32.081284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.081292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.081588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.081595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.081880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.081887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.082181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.082188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.082487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.082494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 
00:24:58.285 [2024-11-26 19:31:32.082791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.082798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.083108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.083115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.083464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.083470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.083775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.083781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 00:24:58.285 [2024-11-26 19:31:32.084075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.285 [2024-11-26 19:31:32.084081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.285 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.117059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.117065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.117359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.117366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.117646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.117653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.117973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.117979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.118284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.118292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.118624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.118631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.118930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.118937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.119244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.119251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.119563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.119570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.119898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.119905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.120191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.120198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.120393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.120400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.120703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.120710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.121019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.121026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.121308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.121315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.121603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.121609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.121791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.121799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.122064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.122071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.122336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.122343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.122641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.122649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.122975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.122982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.123325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.123332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.123534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.123541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.123834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.123840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.124022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.124029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.124358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.124365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.124653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.124660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.124954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.124961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.125356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.125364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.125651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.125658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.126001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.126008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.126312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.126319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.126614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.126621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.126910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.126917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.127207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.127214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.127546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.127553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.127854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.127860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.128186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.128194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.128530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.128537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.128700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.128707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.128944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.128951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.129234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.129241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.129631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.129637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.130037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.130044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.130347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.130355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 
00:24:58.563 [2024-11-26 19:31:32.130682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.130688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.130890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.130897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.563 [2024-11-26 19:31:32.131082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.563 [2024-11-26 19:31:32.131089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.563 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.131248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.131582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.131589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.131791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.131798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.132091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.132098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.132406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.132413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.132710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.132717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.133016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.133023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.133316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.133323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.133617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.133624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.133910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.133917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.134249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.134257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.134562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.134571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.134901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.134908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.135206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.135213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.135485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.135492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.135781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.135787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.135943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.135951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.136224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.136231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.136448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.136456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.136833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.136840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.137146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.137154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.137463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.137470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.137772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.137779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.137829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.137836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.138105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.138112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.138463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.138470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 00:24:58.564 [2024-11-26 19:31:32.138534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.138541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
00:24:58.564 [2024-11-26 19:31:32.138847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.564 [2024-11-26 19:31:32.138853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f602c000b90 with addr=10.0.0.2, port=4420 00:24:58.564 qpair failed and we were unable to recover it. 
[the preceding connect() / sock connection error / "qpair failed" sequence repeats for tqpair=0x7f602c000b90 through 2024-11-26 19:31:32.164]
00:24:58.567 Read completed with error (sct=0, sc=8) 00:24:58.567 starting I/O failed 00:24:58.567 Write completed with error (sct=0, sc=8) 00:24:58.567 starting I/O failed 
[the Read/Write "completed with error (sct=0, sc=8) ... starting I/O failed" lines repeat for the remaining outstanding I/O]
00:24:58.567 [2024-11-26 19:31:32.165140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:58.567 [2024-11-26 19:31:32.165324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.567 [2024-11-26 19:31:32.165340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.567 qpair failed and we were unable to recover it. 
[the connect() / sock connection error / "qpair failed" sequence repeats for tqpair=0x7f6020000b90 through 2024-11-26 19:31:32.169]
00:24:58.567 [2024-11-26 19:31:32.170237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.567 [2024-11-26 19:31:32.170245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.567 qpair failed and we were unable to recover it. 00:24:58.567 [2024-11-26 19:31:32.170540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.567 [2024-11-26 19:31:32.170548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.567 qpair failed and we were unable to recover it. 00:24:58.567 [2024-11-26 19:31:32.170859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.170866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.171074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.171081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.171431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.171438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.171631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.171638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.171926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.171933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.172260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.172268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.172568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.172575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.172870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.172877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.173194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.173201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.173478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.173485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.173814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.173821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.174154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.174161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.174454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.174680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.174688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.174985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.174992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.175294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.175304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.175632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.175937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.175943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.176190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.176197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.176506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.176513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.176587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.176594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.176784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.176791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.177150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.177157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.177472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.177479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.177774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.177781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.178120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.178128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.178325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.178332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.178502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.178509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.178915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.178922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.179236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.179244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.179548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.179555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.179848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.179855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.180156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.180164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.180467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.180474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.180821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.180828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.181116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.181123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.181408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.181415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 00:24:58.568 [2024-11-26 19:31:32.181699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.181706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.568 qpair failed and we were unable to recover it. 
00:24:58.568 [2024-11-26 19:31:32.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.568 [2024-11-26 19:31:32.182089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.182403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.182410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.182704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.182711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.182996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.183003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.183300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.183307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.183679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.183686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.183879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.183887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.184159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.184167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.184350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.184357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.184662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.184669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.184984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.184991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.185312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.185320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.185652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.185659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.185846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.185853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.186159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.186168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.186485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.186492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.186784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.186791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.187104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.187114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.187197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.187204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.187420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.187429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.187754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.187761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.188049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.188057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.188336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.188344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.188525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.188533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.188844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.188852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.189123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.189131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.189426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.189433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.189719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.189726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.190013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.190020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.190301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.190308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.190528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.190536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.190852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.190859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.191139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.191146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.191517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.191524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.191701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.191708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 
00:24:58.569 [2024-11-26 19:31:32.192007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.192015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.192210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.192218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.192526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.192533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.569 [2024-11-26 19:31:32.192788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.569 [2024-11-26 19:31:32.192795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.569 qpair failed and we were unable to recover it. 00:24:58.570 [2024-11-26 19:31:32.193113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.570 [2024-11-26 19:31:32.193121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.570 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.226094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.226105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.226429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.226436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.226729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.226735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.226901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.226908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.227159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.227166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.227480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.227486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.227797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.227804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.228081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.228088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.228301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.228308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.228644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.228650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.229069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.229075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.229394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.229402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.229737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.229744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.229932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.229939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.230213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.230220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.230520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.230526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.230852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.230859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.231144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.231151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.231440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.231446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.231773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.231779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.232090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.232097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.232346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.232353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.232693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.232700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.232990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.232997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.233329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.233337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.233639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.233645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.233968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.233975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.234216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.234225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.234441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.234447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.234823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.234829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.235138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.235144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.235439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.235446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.235739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.235746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.236036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.236043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.236361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.236368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 
00:24:58.573 [2024-11-26 19:31:32.236683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.236690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.573 qpair failed and we were unable to recover it. 00:24:58.573 [2024-11-26 19:31:32.236865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.573 [2024-11-26 19:31:32.236872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.237142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.237149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.237405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.237412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.237725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.237732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.237907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.237914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.238205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.238212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.238415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.238425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.238693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.238699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.238964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.238971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.239353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.239360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.239677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.239684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.239880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.239887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.240188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.240195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.240489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.240496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.240782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.240788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.241086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.241093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.241386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.241393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.241688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.241695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.241993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.242000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.242356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.242363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.242648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.242655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.242939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.242946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.243114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.243121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.243334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.243340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.243591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.243598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.243862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.243868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.244243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.244250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.244554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.244561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.244917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.244924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.245101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.245108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.245394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.245400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.245702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.245711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.245996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.246003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.246308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.246316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.574 [2024-11-26 19:31:32.246619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.246626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.246914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.246921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.247219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.247225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.247438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.247444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 00:24:58.574 [2024-11-26 19:31:32.247771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.574 [2024-11-26 19:31:32.247777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.574 qpair failed and we were unable to recover it. 
00:24:58.575 [2024-11-26 19:31:32.248093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.575 [2024-11-26 19:31:32.248101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.575 qpair failed and we were unable to recover it. 00:24:58.575 [2024-11-26 19:31:32.248402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.575 [2024-11-26 19:31:32.248409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.575 qpair failed and we were unable to recover it. 00:24:58.575 [2024-11-26 19:31:32.248579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.575 [2024-11-26 19:31:32.248586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.575 qpair failed and we were unable to recover it. 00:24:58.575 [2024-11-26 19:31:32.249013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.575 [2024-11-26 19:31:32.249019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.575 qpair failed and we were unable to recover it. 00:24:58.575 [2024-11-26 19:31:32.249319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.575 [2024-11-26 19:31:32.249326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.575 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.281578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.281584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.281871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.281878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.282242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.282249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.282535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.282541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.282868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.282874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.283211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.283218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.283492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.283499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.283795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.283802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.284092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.284098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.284428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.284435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.284721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.284728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.285024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.285031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.285212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.285219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.285429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.285436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.285735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.285741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.286026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.286032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.286286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.286293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.286614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.286620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.286918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.286925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.287080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.287087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.287432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.287439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.287773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.287779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.287983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.287990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.288195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.288201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.288534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.288541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.288757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.288764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.289088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.289095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.289395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.289402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.289565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.289572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.289918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.289925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 
00:24:58.578 [2024-11-26 19:31:32.290109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.290117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.290402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.290410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.290579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.290587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.578 qpair failed and we were unable to recover it. 00:24:58.578 [2024-11-26 19:31:32.290882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.578 [2024-11-26 19:31:32.290889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.291164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.291171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.291517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.291524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.291802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.291809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.292104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.292113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.292518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.292524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.292903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.292909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.293216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.293224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.293535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.293542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.293827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.293834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.294169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.294177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.294568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.294576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.294769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.294777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.295096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.295106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.295377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.295384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.295576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.295582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.295934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.295941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.296224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.296230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.296532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.296538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.296876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.296884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.297191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.297198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.297553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.297560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.297852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.297859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.298160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.298168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.298449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.298456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.298667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.298674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.298997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.299004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.299350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.299357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.299707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.299713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.300025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.300032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.300373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.300380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.300714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.300722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.300921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.300928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.301087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.301094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.301395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.301402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.301677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.301684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.301821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.301827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 
00:24:58.579 [2024-11-26 19:31:32.302120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.302127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.579 [2024-11-26 19:31:32.302427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.579 [2024-11-26 19:31:32.302434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.579 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.302720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.302727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.303015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.303021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.303297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.303304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 
00:24:58.580 [2024-11-26 19:31:32.303610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.303617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.303898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.303905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.304101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.304109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.304393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.304400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 00:24:58.580 [2024-11-26 19:31:32.304598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.580 [2024-11-26 19:31:32.304605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.580 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.338483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.338489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.338775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.338782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.338944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.338951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.339137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.339145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.339433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.339439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.339611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.339618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.339896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.339903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.340208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.340215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.340485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.340492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.340694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.340701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.341018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.341025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.341322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.341330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.341628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.341635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.341940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.341947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.342293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.342300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.342644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.342651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.342955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.342962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.343311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.343320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.343631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.343639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.343955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.343963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.344247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.344253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.344552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.344558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.344856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.344862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.345148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.345155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.345471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.345478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.583 [2024-11-26 19:31:32.345764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.345771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.346065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.346071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.346380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.346387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.346674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.346681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 00:24:58.583 [2024-11-26 19:31:32.346977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.583 [2024-11-26 19:31:32.346983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.583 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.347352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.347360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.347634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.347640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.347831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.347839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.348071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.348079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.348387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.348397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.348576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.348583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.348733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.348740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.348907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.348914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.349240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.349247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.349537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.349544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.349741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.349747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.350080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.350087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.350374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.350381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.350686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.350693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.351015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.351022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.351199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.351206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.351431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.351437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.351784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.351791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.352108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.352115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.352458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.352465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.352757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.352764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.353112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.353120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.353485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.353491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.353818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.353825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.354178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.354185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.354480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.354487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.354767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.354774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.355067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.355073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.355430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.355437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.355724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.355731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 
00:24:58.584 [2024-11-26 19:31:32.356022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.356029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.356334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.356341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.356691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.356698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.356888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.584 [2024-11-26 19:31:32.356895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.584 qpair failed and we were unable to recover it. 00:24:58.584 [2024-11-26 19:31:32.357219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.357226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 
00:24:58.585 [2024-11-26 19:31:32.357483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.357490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.357814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.357821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.358112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.358119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.358490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.358497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.358790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.358797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 
00:24:58.585 [2024-11-26 19:31:32.359110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.359117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.359430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.359437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.359744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.359751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.360057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.360064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.360415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.360424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 
00:24:58.585 [2024-11-26 19:31:32.360641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.360648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.360918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.360925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.361266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.361273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.361554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.361561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 00:24:58.585 [2024-11-26 19:31:32.361847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.585 [2024-11-26 19:31:32.361854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.585 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.394002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.394009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.394297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.394304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.394588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.394594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.394890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.394897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.395205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.395213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.395524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.395531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.395845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.395853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.396154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.396160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.396467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.396473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.396825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.396832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.397118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.397126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.397467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.397474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.397793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.398093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.398104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.398452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.398459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.398748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.398755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.399041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.399047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.399244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.399251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.399576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.399582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.399889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.399896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.400184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.400192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.400555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.400562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.400866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.400873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.401069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.401077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.401384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.401391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 
00:24:58.588 [2024-11-26 19:31:32.401704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.401712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.402046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.588 [2024-11-26 19:31:32.402054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.588 qpair failed and we were unable to recover it. 00:24:58.588 [2024-11-26 19:31:32.402364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.402372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.402658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.402665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.402864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.402871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.403177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.403184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.403494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.403503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.403798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.403805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.404094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.404106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.404398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.404405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.404774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.404782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.405122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.405130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.405461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.405467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.405850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.405857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.406129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.406137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.406516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.406523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.406813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.406820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.407031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.407039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.407440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.407447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.407722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.407731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.408111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.408118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.408396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.408403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.408734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.408741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.409045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.409051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.409374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.409381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.409536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.409543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.409814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.409820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.410032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.410039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.410257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.410265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.410564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.410571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.410892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.410898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.411175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.411183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.411461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.411468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.411862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.411869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.589 [2024-11-26 19:31:32.412210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.412217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 
00:24:58.589 [2024-11-26 19:31:32.412403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.589 [2024-11-26 19:31:32.412411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.589 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.412739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.412748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.413040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.413047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.413237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.413244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.413436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.413443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-11-26 19:31:32.413771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.413779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.414079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.414086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.414356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.414363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.414645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.414652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.414939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.414947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-11-26 19:31:32.415292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.415548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.415556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.415860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.415867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.416240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.416247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.416519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.416526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-11-26 19:31:32.416841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.416848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.417142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.417149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.417424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.417432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.417873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.417880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-11-26 19:31:32.418178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-11-26 19:31:32.418185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-11-26 19:31:32.418541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.418548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.418841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.418848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.419155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.419163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.419486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.419493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.419698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.419706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.419888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.419894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.420111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.420118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.420397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.420404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.420702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.420710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.421037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.421044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.421353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.421361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.421677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.421684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.421856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.421862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.422147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.422155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.422452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.422459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.422599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.422607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.422974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.422980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.423192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.423199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.423493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.423500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.423701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.423708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.423899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.423905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.424173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.424181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.424443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.424450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.424718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.424725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.424979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.424986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.425366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.425373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.425698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.425704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.425888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.425895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.426166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.426173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.426533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.426540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-11-26 19:31:32.426727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.426733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.427082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.427089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.427277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.427284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.427469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.427475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-11-26 19:31:32.427660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-11-26 19:31:32.427667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.427986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.427993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.428313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.428320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.428612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.428619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.428954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.428961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.429337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.429344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.429615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.429622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.429762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.429769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.430041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.430048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.430231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.430238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.430545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.430555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.430904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.430912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.431237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.431245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.431543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.431550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.431900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.431907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.432099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.432109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.432296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.432303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.432640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.432647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.432955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.432963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.433162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.433169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.433498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.433506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.433855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.433862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.434209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.434216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.434533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.434540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.434926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.434933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.435230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.435237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.435472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.435479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.435777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.435784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.436063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.436070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.436267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.436275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.436542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.436549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-11-26 19:31:32.436759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-11-26 19:31:32.436766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-11-26 19:31:32.436992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.436998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.437300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.437307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.437559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.437566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.437756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.437762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-11-26 19:31:32.437937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.437945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.438240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.438247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.438546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.438553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.438742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.438748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.439060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.439066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-11-26 19:31:32.439350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.439358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.439662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.439669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.439995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.440002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.440208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.440215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.440629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.440636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-11-26 19:31:32.440905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.440913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.441128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.441135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.441457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.441463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.441793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.441800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.442133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.442142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-11-26 19:31:32.442418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.442425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.442769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.442776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.442963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.442970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.443329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.443337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.443621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.443627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-11-26 19:31:32.443918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.443926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.444163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.444170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.444495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.444502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.444795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.444802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-11-26 19:31:32.445068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-11-26 19:31:32.445074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.868 [2024-11-26 19:31:32.455028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.869 [2024-11-26 19:31:32.455136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420
00:24:58.869 qpair failed and we were unable to recover it.
00:24:58.869 [2024-11-26 19:31:32.455497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.869 [2024-11-26 19:31:32.455533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420
00:24:58.869 qpair failed and we were unable to recover it.
00:24:58.869 [2024-11-26 19:31:32.455872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.869 [2024-11-26 19:31:32.455902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6024000b90 with addr=10.0.0.2, port=4420
00:24:58.869 qpair failed and we were unable to recover it.
00:24:58.871 [2024-11-26 19:31:32.476176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.476183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.476532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.476539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.476827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.476834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.477124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.477131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.477438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.477444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-11-26 19:31:32.477640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.477647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.477921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.477927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.478238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.478246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.478557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.478563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.478857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.478864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-11-26 19:31:32.479154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.479161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.479432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.479438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.479739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.479747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.479956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.479963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.480278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.480285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-11-26 19:31:32.480561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.480568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.480854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.480861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-11-26 19:31:32.481154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-11-26 19:31:32.481161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.481475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.481482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.481805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.481811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.482099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.482108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.482470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.482476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.482651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.482658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.482824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.482831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.483158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.483165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.483345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.483351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.483675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.483683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.484025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.484032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.484379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.484386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.484731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.484737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.485041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.485047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.485351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.485358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.485702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.485709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.485890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.485897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.486186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.486194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.486511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.486518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.486806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.486813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.487122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.487129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.487438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.487445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.487793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.487800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.488143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.488150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.488473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.488480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.488790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.488797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.489078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.489084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.489306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.489313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-11-26 19:31:32.489592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.489599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.489888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.489895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.490248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.490255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.490536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.490543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-11-26 19:31:32.490891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-11-26 19:31:32.490897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.491059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.491066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.491366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.491374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.491642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.491649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.491969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.491976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.492262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.492269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.492640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.492647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.492932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.492938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.493237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.493243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.493540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.493547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.493863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.493870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.494043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.494051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.494212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.494220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.494485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.494493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.494743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.494750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.495075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.495082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.495327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.495334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.495637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.495645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.495940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.495946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.496239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.496246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.496549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.496555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.496764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.496772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.497106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.497113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.497388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.497394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.497679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.497686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-11-26 19:31:32.497862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.497869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-11-26 19:31:32.498140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-11-26 19:31:32.498147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it.
00:24:58.873 [log condensed: the same error pair repeated continuously from 19:31:32.498140 through 19:31:32.531056 for tqpair=0x7f6020000b90 (addr=10.0.0.2, port=4420); every connect() attempt failed with errno = 111 (ECONNREFUSED) and the qpair was never recovered]
00:24:58.877 [2024-11-26 19:31:32.531137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.531143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.531462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.531468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.531874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.531881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.532185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.532193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.532444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.532451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 
00:24:58.877 [2024-11-26 19:31:32.532757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.532764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.533086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.533092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.533473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.533480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.533791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.533798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.534171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.534178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 
00:24:58.877 [2024-11-26 19:31:32.534504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.534511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.534790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.534796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-11-26 19:31:32.535118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-11-26 19:31:32.535125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.535400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.535407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.535586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.535593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.535790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.535796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.536089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.536096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.536399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.536406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.536746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.536752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.536967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.536973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.537268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.537275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.537604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.537611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.537777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.537784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.538005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.538012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.538321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.538328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.538672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.538679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.538994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.539001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.539203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.539210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.539663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.539670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.539957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.539964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.540288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.540296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.540507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.540514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.540831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.541127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.541134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.541490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.541497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.541861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.541868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.542177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.542186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.542424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.542431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.542719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.542726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.543083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.543090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-11-26 19:31:32.543390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.543397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.543735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.543742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.544015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.544022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-11-26 19:31:32.544333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-11-26 19:31:32.544340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.544669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.544676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.544730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.544736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.545030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.545037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.545204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.545211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.545539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.545546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.545795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.545801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.546135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.546142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.546463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.546470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.546779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.546785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.547086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.547094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.547466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.547473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.547726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.547733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.548054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.548060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.548369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.548376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.548676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.548683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.548963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.548970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.549178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.549185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.549471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.549478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.549822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.549829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.550123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.550130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.550315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.550321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.550596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.550603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.550778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.550785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.550978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.550985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.551316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.551323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.551618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.551625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-11-26 19:31:32.551786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.551793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.552152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.552159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.552462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.552468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.552646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-11-26 19:31:32.552653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-11-26 19:31:32.552948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-11-26 19:31:32.552955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 
00:24:58.880 [2024-11-26 19:31:32.553146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.880 [2024-11-26 19:31:32.553153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:58.880 qpair failed and we were unable to recover it.
[... the same record triple repeats continuously from 19:31:32.553 through 19:31:32.586: posix.c:1054:posix_sock_create connect() failed with errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it." ...]
00:24:58.884 [2024-11-26 19:31:32.585993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.884 [2024-11-26 19:31:32.586000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:58.884 qpair failed and we were unable to recover it.
00:24:58.884 [2024-11-26 19:31:32.586170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.586177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.586519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.586525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.586803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.586809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.587119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.587125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.587535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.587542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-11-26 19:31:32.587862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.587869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.588134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.588142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.588468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.588475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.588773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.588781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.589079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.589086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-11-26 19:31:32.589137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.589145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.589433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.589439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.589717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.589724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.590036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.590043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.590221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.590228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-11-26 19:31:32.590396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.590403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.590692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.590700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.591003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.591009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.591215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-11-26 19:31:32.591222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-11-26 19:31:32.591580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.591587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.591884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.591891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.592128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.592135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.592418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.592425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.592712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.592718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.593052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.593059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.593332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.593340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.593636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.593643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.593913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.593920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.594191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.594198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.594493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.594500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.594652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.594658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.595023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.595030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.595332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.595338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.595519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.595527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.595813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.595820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.596108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.596115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.596411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.596418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.596713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.596720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.597014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.597020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.597229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.597237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.597539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.597546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.597818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.597825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.598129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.598136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.598527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.598534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-11-26 19:31:32.598801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-11-26 19:31:32.598808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-11-26 19:31:32.599139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.599146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.599346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.599352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.599644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.599651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.599923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.599930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.600248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.600256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-11-26 19:31:32.600458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.600465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.600825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.600832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.601139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.601146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.601341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.601348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.601565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.601572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-11-26 19:31:32.601912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.601919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.602237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.602245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.602522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.602528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.602912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.602918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.603211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.603218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-11-26 19:31:32.603370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.603377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.603714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.603721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.604018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.604025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.604305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.604312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.604592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.604599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-11-26 19:31:32.604895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.604902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.605166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.605173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.605486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.605493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.605697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.605704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-11-26 19:31:32.606019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.606026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-11-26 19:31:32.606294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-11-26 19:31:32.606301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.887 [2024-11-26 19:31:32.606635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-11-26 19:31:32.606642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-11-26 19:31:32.606917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-11-26 19:31:32.606924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-11-26 19:31:32.607246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-11-26 19:31:32.607254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-11-26 19:31:32.607476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-11-26 19:31:32.607484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 
00:24:58.887 [2024-11-26 19:31:32.607691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-11-26 19:31:32.607698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it.
[The posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously from 19:31:32.607 through 19:31:32.643 (elapsed time 00:24:58.887-00:24:58.891), always against addr=10.0.0.2, port=4420, each repetition ending with "qpair failed and we were unable to recover it." All repetitions report tqpair=0x7f6020000b90 except four entries around 19:31:32.638-639, which report tqpair=0x165a490.]
00:24:58.891 [2024-11-26 19:31:32.643536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.891 [2024-11-26 19:31:32.643544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.891 qpair failed and we were unable to recover it. 00:24:58.891 [2024-11-26 19:31:32.643850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.891 [2024-11-26 19:31:32.643857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.891 qpair failed and we were unable to recover it. 00:24:58.891 [2024-11-26 19:31:32.644176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.891 [2024-11-26 19:31:32.644183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.891 qpair failed and we were unable to recover it. 00:24:58.891 [2024-11-26 19:31:32.644508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.891 [2024-11-26 19:31:32.644518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.891 qpair failed and we were unable to recover it. 00:24:58.891 [2024-11-26 19:31:32.644803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.644810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.644985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.644992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.645300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.645309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.645605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.645613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.645932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.645939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.646226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.646233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.646514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.646522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.646834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.646841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.647130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.647137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.647307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.647314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.647611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.647618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.647917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.647924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.648219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.648227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.648533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.648540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.648858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.648864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.649030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.649037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.649409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.649416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.649736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.649744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.650051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.650059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.650369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.650376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.650647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.650654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.650853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.650859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.651131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.651138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.651220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.651227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.651494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.651500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.651796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.651803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.651979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.651986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.652305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.652313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.652660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.652667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.653025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.653032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.653378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.653386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.653690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.653697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.654000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.654007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.654325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.654333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.654640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.654647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.654945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.654952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 
00:24:58.892 [2024-11-26 19:31:32.655160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.655167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.892 [2024-11-26 19:31:32.655428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.892 [2024-11-26 19:31:32.655436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.892 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.655763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.655770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.655948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.655957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.656144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.656151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.656453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.656460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.656771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.656779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.657104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.657111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.657411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.657417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.657593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.657600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.657959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.657966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.658260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.658267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.658568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.658754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.658763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.659072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.659079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.659442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.659449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.659737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.659744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.660075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.660082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.660382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.660390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.660697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.660704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.661009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.661016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.661325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.661332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.661702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.661709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.661780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.661786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.662091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.662098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.662284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.662291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.662595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.662602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.662904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.662910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.663167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.663175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.663391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.663397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 
00:24:58.893 [2024-11-26 19:31:32.663760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.663768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.664112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.664120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.664418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.664426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.893 [2024-11-26 19:31:32.664610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.893 [2024-11-26 19:31:32.664617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.893 qpair failed and we were unable to recover it. 00:24:58.894 [2024-11-26 19:31:32.664922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.894 [2024-11-26 19:31:32.664928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.894 qpair failed and we were unable to recover it. 
00:24:58.894 [2024-11-26 19:31:32.665142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.894 [2024-11-26 19:31:32.665149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:58.894 qpair failed and we were unable to recover it.
00:24:58.897 [2024-11-26 19:31:32.698069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.698076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.698369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.698376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.698733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.698741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.699024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.699030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.699333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.699341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.699535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.699542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.699710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.699716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.699917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.699924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.700322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.700330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.700377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.700385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.700671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.700678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.700967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.700974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.701277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.701284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.701579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.701588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.701789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.701795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.702070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.702077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.702290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.702297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.702572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.702578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.702757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.702767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.703072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.703079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.703430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.703437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.703738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.703746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.704009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.704016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.704404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.704411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.704684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.704690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.704970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.704976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.705172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.705178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.705477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.705485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.705694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.705700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.706025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.706032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 
00:24:58.897 [2024-11-26 19:31:32.706219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.706226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.706520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.897 [2024-11-26 19:31:32.706527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.897 qpair failed and we were unable to recover it. 00:24:58.897 [2024-11-26 19:31:32.706836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.706843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.707024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.707031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.707401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.707408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 
00:24:58.898 [2024-11-26 19:31:32.707667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.707674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.708007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.708014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.708263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.708271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.708566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.708573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.708894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.708901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 
00:24:58.898 [2024-11-26 19:31:32.709055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.709063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.709425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.709433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.709635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.709641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.709913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.709919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.710106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.710115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 
00:24:58.898 [2024-11-26 19:31:32.710389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.710396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.710711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.710717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.711003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.711010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.711310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.711317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.711629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.711636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 
00:24:58.898 [2024-11-26 19:31:32.712433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.712450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.712752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.712760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.713020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.713027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.713353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.713363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:58.898 [2024-11-26 19:31:32.713548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.713556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 
00:24:58.898 [2024-11-26 19:31:32.713852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.898 [2024-11-26 19:31:32.713859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:58.898 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.714228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.714237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.714950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.714964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.715592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.715606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.715881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.715891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 
00:24:59.177 [2024-11-26 19:31:32.716191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.716199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.716505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.177 [2024-11-26 19:31:32.716512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.177 qpair failed and we were unable to recover it. 00:24:59.177 [2024-11-26 19:31:32.716695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.716703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.717011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.717018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.717409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.717417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 
00:24:59.178 [2024-11-26 19:31:32.717584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.717592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.717859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.717867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.718051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.718058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.718516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.718523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.718839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.718846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 
00:24:59.178 [2024-11-26 19:31:32.719047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.719054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.719248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.719256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.719589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.719596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.719884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.719891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.720233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.720240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 
00:24:59.178 [2024-11-26 19:31:32.720538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.720545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.720867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.720874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.721127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.721134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.721532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.721539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 00:24:59.178 [2024-11-26 19:31:32.721838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.178 [2024-11-26 19:31:32.721845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.178 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-11-26 19:31:32.753540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.753547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.753814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.753821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.754096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.754106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.754435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.754441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.754756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.754763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-11-26 19:31:32.754978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.754985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.755148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.755156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.755511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.755517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.755731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.755738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.756061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.756068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-11-26 19:31:32.756460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.756469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.756727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.756733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.757053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.757060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.757359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.757366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.757466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.757472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-11-26 19:31:32.757557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.757563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.757784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.757791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.758051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.758058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.758525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.758532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.758808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.758815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-11-26 19:31:32.759061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.759068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-11-26 19:31:32.759424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-11-26 19:31:32.759432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.759739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.759745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.760013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.760020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.760268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.760275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.760550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.760556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.760846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.760853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.761155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.761162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.761555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.761561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.761873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.761880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.762161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.762168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.762483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.762490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.762675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.762682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.762907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.762914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.763067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.763074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.763359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.763367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.763676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.763684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.763972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.763980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.764307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.764315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.764480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.764486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.764699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.764706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.764871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.764879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.765190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.765197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.765515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.765522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.765792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.765799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.766106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.766114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.766461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.766468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.766737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.766744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.767036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.767043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.767419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.767426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-11-26 19:31:32.767620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.767629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.767948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.767955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.768275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.768282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.768595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-11-26 19:31:32.768602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-11-26 19:31:32.768905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.768911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-11-26 19:31:32.769198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.769206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.769506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.769513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.769781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.769787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.769966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.769972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.770309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.770316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-11-26 19:31:32.770588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.770595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.770733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.770740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.771044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.771052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.771344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.771351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.771525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.771532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-11-26 19:31:32.771707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.771715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.771994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.772001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.772325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.772332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.772608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.772616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.772909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.772915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-11-26 19:31:32.773200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.773207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.773396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.773403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.773726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.773733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.774032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.774039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-11-26 19:31:32.774294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-11-26 19:31:32.774301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-11-26 19:31:32.774665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.183 [2024-11-26 19:31:32.774671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.183 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (connect() failed, errno = 111 (ECONNREFUSED), tqpair=0x7f6020000b90, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats over 100 more times with advancing timestamps from 19:31:32.774 through 19:31:32.808 ...]
00:24:59.186 [2024-11-26 19:31:32.808210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.808217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.808543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.808550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.808885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.808892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.809063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.809070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.809410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.809417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 
00:24:59.186 [2024-11-26 19:31:32.809701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.809708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.809892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.809898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.810611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.810627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.811262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.811276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.812062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.812078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 
00:24:59.186 [2024-11-26 19:31:32.812785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.812800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.812989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.812996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.813211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.813218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.186 qpair failed and we were unable to recover it. 00:24:59.186 [2024-11-26 19:31:32.813586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.186 [2024-11-26 19:31:32.813593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.813780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.813787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.814105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.814112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.814266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.814274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.814554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.814562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.814881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.814887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.815190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.815197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.815497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.815505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.815675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.815682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.815998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.816005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.816353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.816360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.816692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.816699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.817035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.817041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.817238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.817245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.817560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.817567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.817859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.817865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.818031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.818037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.818314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.818321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.818656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.818664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.818824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.818832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.819135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.819143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.819453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.819460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.819774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.819782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.820085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.820092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.820408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.820416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.820708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.820714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.821061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.821067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.821268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.821275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.821607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.821614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.821910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.821918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.822237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.822244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.822539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.822546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.822892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.822899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.823186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.823194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.823504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.823511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.823814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.823821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 00:24:59.187 [2024-11-26 19:31:32.824158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.187 [2024-11-26 19:31:32.824165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.187 qpair failed and we were unable to recover it. 
00:24:59.187 [2024-11-26 19:31:32.824476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.824483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.824779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.824786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.825073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.825080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.825379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.825386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.825677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.825684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 
00:24:59.188 [2024-11-26 19:31:32.825991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.825998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.826157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.826164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.826535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.826542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.826852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.826859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.827037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.827045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 
00:24:59.188 [2024-11-26 19:31:32.827246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1657030 is same with the state(6) to be set 00:24:59.188 [2024-11-26 19:31:32.827968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.828032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.828580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.828643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x165a490 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it. 00:24:59.188 [2024-11-26 19:31:32.828991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.188 [2024-11-26 19:31:32.828999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.188 qpair failed and we were unable to recover it.
[... previous two messages repeated 32 more times for tqpair=0x7f6020000b90 (addr=10.0.0.2, port=4420) between 19:31:32.829 and 19:31:32.838 ...]
00:24:59.189 [2024-11-26 19:31:32.839066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.839073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.839408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.839415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.839594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.839600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.839870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.839876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.840180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.840187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.840578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.840585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.840902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.840909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.841208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.841215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.841542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.841549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.841876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.841883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.842095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.842105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.842398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.842700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.842706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.843074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.843080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.843372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.843379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.843582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.843589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.843899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.843905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.844105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.844112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.844403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.844412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.844748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.844755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.845029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.845036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.845381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.845388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.845728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.845735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.846021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.846028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.846316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.846323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.846616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.846622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.846931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.846937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.847228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.847235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.847535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.847542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 00:24:59.189 [2024-11-26 19:31:32.847847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.189 [2024-11-26 19:31:32.847854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.189 qpair failed and we were unable to recover it. 
00:24:59.189 [2024-11-26 19:31:32.848141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.848148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.848328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.848335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.848688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.848695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.849008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.849015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.849169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.849177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.849475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.849482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.849847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.849854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.850183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.850190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.850522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.850529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.850827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.850834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.851120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.851128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.851437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.851444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.851747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.851754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.852083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.852090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.852385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.852392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.852685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.852693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.852987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.852994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.853279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.853287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.853589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.853596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.853794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.853801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.854130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.854138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.854433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.854441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.854741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.854748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.854958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.854965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.855295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.855302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.855581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.855588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.855886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.855893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.856172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.856179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.856487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.856495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.856798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.856806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.857109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.857117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.857323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.857330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.857632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.857639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.857926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.857934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.858235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.858242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-11-26 19:31:32.858560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.858566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.858938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.858944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.859241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-11-26 19:31:32.859248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-11-26 19:31:32.859519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.859526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-11-26 19:31:32.859794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.859802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-11-26 19:31:32.860104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.860111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-11-26 19:31:32.860668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.860679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-11-26 19:31:32.860963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.860971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-11-26 19:31:32.861257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.861265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-11-26 19:31:32.861563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-11-26 19:31:32.861569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-11-26 19:31:32.861897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.191 [2024-11-26 19:31:32.861905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.191 qpair failed and we were unable to recover it.
00:24:59.191 [... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats approximately 100 more times between 19:31:32.862 and 19:31:32.891 ...]
00:24:59.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3903671 Killed "${NVMF_APP[@]}" "$@" 00:24:59.193 [2024-11-26 19:31:32.891766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-11-26 19:31:32.891773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:24:59.193 [2024-11-26 19:31:32.891946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-11-26 19:31:32.891953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:59.194 [2024-11-26 19:31:32.892249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:59.194 [2024-11-26 19:31:32.892256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:59.194 [2024-11-26 19:31:32.892450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.892457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:59.194 [2024-11-26 19:31:32.892842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.892849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.893138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.893145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.893502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.893509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.893807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.893814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.894117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.894124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.894406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.894413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.894574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.894581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.894885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.894892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.895219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.895227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.895573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.895579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.895724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.895732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.895934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.895940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.896266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.896273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.896443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.896449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.896791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.896798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.897132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.897139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.897443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.897450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.897737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.897744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.898048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.898055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.898349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.898356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.898509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.898516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.898696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.898703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3904860
00:24:59.194 [2024-11-26 19:31:32.898923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.898930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3904860 [2024-11-26 19:31:32.899243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.899251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3904860 ']'
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 [2024-11-26 19:31:32.899561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 [2024-11-26 19:31:32.899909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.899917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 19:31:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-11-26 19:31:32.900235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.900243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.900560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.900567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.900869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.900876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.194 [2024-11-26 19:31:32.901184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.194 [2024-11-26 19:31:32.901192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.194 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.901505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.901512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.901843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.901851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.902145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.902153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.902483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.902491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.902553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.902561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.902813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.902820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.903113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.903121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.903504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.903512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.903669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.903676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.903973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.903980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.904289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.904297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.904618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.904625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.904791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.904799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.905003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.905011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.905287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.905295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.905591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.905599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.905873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.905880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.906112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.906120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.906399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.906406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.906732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.906739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.907025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.907032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.907359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.907367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.907680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.907688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.907987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.907994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.908325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.908333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.908639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.908646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.908953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.908960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.909269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.909276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.909440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.909447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.909728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.909735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.910052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.910059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.910429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.910436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.910728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.910735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.910887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.910895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.911173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.911181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.911378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.911385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.911659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.911666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.195 [2024-11-26 19:31:32.911991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.195 [2024-11-26 19:31:32.911999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.195 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.912184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.912191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.912379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.912385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.912724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.912731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.912909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.912915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.913153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.913160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.913421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.913429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.913744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.913752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.913932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.913940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.914226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.914233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.914579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.914586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.914757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.914764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.915109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.915116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.915416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.915423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.915732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.915739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.915966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.915973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.916297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.916305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.916635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.916956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.916964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.917122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.917130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.917460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.917468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.917783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.196 [2024-11-26 19:31:32.917790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.196 qpair failed and we were unable to recover it.
00:24:59.196 [2024-11-26 19:31:32.918121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.918128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.918468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.918474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.918824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.918830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.919134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.919142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.919464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.919470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-11-26 19:31:32.919787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.919794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.920104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.920111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.920520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.920527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.920819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.920826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.921224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.921232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-11-26 19:31:32.921548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.921555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.921876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-11-26 19:31:32.921883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-11-26 19:31:32.922080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.922087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.922501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.922508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.922812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.922819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.923122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.923129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.923326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.923333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.923673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.923680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.923991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.923998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.924326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.924334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.924619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.924626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.924925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.924933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.925206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.925213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.925539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.925546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.925858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.925865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.926196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.926204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.926388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.926395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.926711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.926717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.926897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.926904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.927228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.927235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.927543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.927550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.927880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.927888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.928089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.928096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.928418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.928425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.928606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.928613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.928896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.928902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.929227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.929237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.929459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.929466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.929696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.929893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.929900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.930340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.930347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.930645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.930652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.930842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.930850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.931203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.931210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.931504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.931511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.931828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.931835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.932012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.932019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.932319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.932326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.932721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-11-26 19:31:32.933025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-11-26 19:31:32.933033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-11-26 19:31:32.933359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.933366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.933689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.933696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.933995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.934002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.934391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.934399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.934707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.934714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 
00:24:59.198 [2024-11-26 19:31:32.935058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.935065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.935397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.935404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.935764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.935771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.936084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.936091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-11-26 19:31:32.936419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-11-26 19:31:32.936427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 
00:24:59.198 [2024-11-26 19:31:32.937665] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:24:59.198 [2024-11-26 19:31:32.937709] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:24:59.199 [2024-11-26 19:31:32.949180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.949188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.949368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.949376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.949637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.949645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.949982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.949989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.950206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.950214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-11-26 19:31:32.950518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.950525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.950901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.950908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.951204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.951211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.951532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.951540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.951845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.951853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-11-26 19:31:32.952034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.952374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.952381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.952709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.952717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.952939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.952947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.953275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.953282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-11-26 19:31:32.953587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.953594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.953773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.953779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.953871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.953877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.954178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.954185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.954363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.954370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-11-26 19:31:32.954721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.954728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.955031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-11-26 19:31:32.955038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-11-26 19:31:32.955357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.955364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.955662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.955668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.955986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.955993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.956325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.956332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.956690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.956697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.956994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.957001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.957199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.957206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.957552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.957559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.957879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.957886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.958171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.958178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.958358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.958365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.958677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.958684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.958978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.958985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.959363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.959370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.959529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.959537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.959921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.959928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.960246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.960254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.960585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.960592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.960940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.960946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.961280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.961564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.961571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.961874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.961881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.962084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.962091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.962415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.962422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.962620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.962628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.962928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.962935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.963259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.963266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.963612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.963619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.963724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.963730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.963921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.963928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.964237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.964246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.964549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.964556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.964859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.964865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-11-26 19:31:32.965183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.965190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-11-26 19:31:32.965524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-11-26 19:31:32.965531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.965666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.965673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.965990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.965997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.966191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.966198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-11-26 19:31:32.966507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.966513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.966696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.966703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.967016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.967023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.967329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.967336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.967504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.967512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-11-26 19:31:32.967856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.967863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.968168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.968175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.968389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.968396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.968724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.968731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.969026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.969033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-11-26 19:31:32.969339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.969346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.969692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.969698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.970039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.970046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.970340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.970348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-11-26 19:31:32.970677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.970684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-11-26 19:31:32.971034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-11-26 19:31:32.971040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it.
00:24:59.201-00:24:59.204 [the identical error pair repeats continuously from 19:31:32.971328 through 19:31:33.004448: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420, each ending with "qpair failed and we were unable to recover it."]
00:24:59.204 [2024-11-26 19:31:33.004800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.004807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.004987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.004994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.005214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.005221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.005419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.005426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.005683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.005689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 
00:24:59.204 [2024-11-26 19:31:33.005997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.006004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.006299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.006305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.006444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.006451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.006804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.006810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.007110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.007118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 
00:24:59.204 [2024-11-26 19:31:33.007422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.007428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.007722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.007729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.008019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.008026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-11-26 19:31:33.008329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-11-26 19:31:33.008337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.008521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.008528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.008799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.008805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.009110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.009117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.009273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.009280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.009523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.009529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.009888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.009895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.009934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.009941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.010290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.010297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.010591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.010599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.010899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.010906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.011175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.011182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.011474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.011481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.011792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.011799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.012136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.012143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.012454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.012461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.012662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.012668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.012987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.012993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.013359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.013366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.013664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.013671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.013879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.013887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.014196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.014203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.014406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.014413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.014742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.014749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.015105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.015112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.015485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.015492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.015740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.015747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.016041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.016047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.016231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.016239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.016582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.016589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.016899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.016906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.017078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.017084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 
00:24:59.205 [2024-11-26 19:31:33.017454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.017461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.017791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.017797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.205 qpair failed and we were unable to recover it. 00:24:59.205 [2024-11-26 19:31:33.017966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.205 [2024-11-26 19:31:33.017973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.018383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.018390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.018685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.018692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 
00:24:59.206 [2024-11-26 19:31:33.019010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.019016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.019215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.019221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.019573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.019580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.019736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.019743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.020077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.020084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 
00:24:59.206 [2024-11-26 19:31:33.020350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.020357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.020653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.020660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.020977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.020983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.021219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.021226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.206 [2024-11-26 19:31:33.021568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.021574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 
00:24:59.206 [2024-11-26 19:31:33.021889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.206 [2024-11-26 19:31:33.021896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.206 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.022254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.022263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.022536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.022545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.022851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.022858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.023170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.023177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 
00:24:59.484 [2024-11-26 19:31:33.023272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.484 [2024-11-26 19:31:33.023534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.023541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.023846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.023854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.024164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.024171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.024466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.024473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.024856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.024864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 
00:24:59.484 [2024-11-26 19:31:33.025072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.025079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.025383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.025390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.025446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.025453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.025599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.025606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.025796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.025803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 
00:24:59.484 [2024-11-26 19:31:33.026161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.026170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.026325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.026333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.026566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.026573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.026840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.026848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 00:24:59.484 [2024-11-26 19:31:33.027168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.484 [2024-11-26 19:31:33.027175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.484 qpair failed and we were unable to recover it. 
00:24:59.488 [2024-11-26 19:31:33.058414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.058421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.058743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.058749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.058802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.488 [2024-11-26 19:31:33.058829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.488 [2024-11-26 19:31:33.058836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.488 [2024-11-26 19:31:33.058843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.488 [2024-11-26 19:31:33.058848] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.488 [2024-11-26 19:31:33.059042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.059050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 
00:24:59.488 [2024-11-26 19:31:33.059364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.059371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.059538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.059545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.059883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.059889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.060177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.060184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.060442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:59.488 [2024-11-26 19:31:33.060587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.060595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 
00:24:59.488 [2024-11-26 19:31:33.060594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:59.488 [2024-11-26 19:31:33.060749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:59.488 [2024-11-26 19:31:33.060750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:24:59.488 [2024-11-26 19:31:33.060895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.488 [2024-11-26 19:31:33.060903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.488 qpair failed and we were unable to recover it. 00:24:59.488 [2024-11-26 19:31:33.061078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.061084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.061416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.061424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.061724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.061731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.061925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.061933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.062253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.062260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.062437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.062444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.062821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.062828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.062997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.063004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.063299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.063307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.063486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.063492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.063891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.063899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.064006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.064013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.064224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.064232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.064556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.064563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.064854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.064861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.065167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.065175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.065381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.065388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.065700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.065707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.065873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.065880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.066266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.066274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.066549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.066555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.066855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.066862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.067192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.067199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.067503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.067510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.067824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.067831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.068010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.068018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.068193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.068200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.068479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.068486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 
00:24:59.489 [2024-11-26 19:31:33.068676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.068683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.069012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.069019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.489 [2024-11-26 19:31:33.069324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.489 [2024-11-26 19:31:33.069331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.489 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.069518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.069527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.069840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.069847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 
00:24:59.490 [2024-11-26 19:31:33.070219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.070227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.070515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.070522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.070828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.070835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.071142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.071150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.071381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.071388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 
00:24:59.490 [2024-11-26 19:31:33.071563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.071573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.071845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.071852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.072019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.072026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.072468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.072476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.072769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.072776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 
00:24:59.490 [2024-11-26 19:31:33.073114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.073121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.073411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.073418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.073724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.073731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.074024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.074038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.074213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.074221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 
00:24:59.490 [2024-11-26 19:31:33.074397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.074403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.074675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.074681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.075008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.075015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.075322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.075329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.075640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.075647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 
00:24:59.490 [2024-11-26 19:31:33.075942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.490 [2024-11-26 19:31:33.075949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.490 qpair failed and we were unable to recover it. 00:24:59.490 [2024-11-26 19:31:33.076134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.076141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.076361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.076662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.076669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.076909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.076916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 
00:24:59.491 [2024-11-26 19:31:33.077227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.077234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.077436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.077443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.077751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.077758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.078141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.078148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.078535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.078542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 
00:24:59.491 [2024-11-26 19:31:33.078861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.078868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.079240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.079248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.079562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.079570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.079741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.079748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.080057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.080065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 
00:24:59.491 [2024-11-26 19:31:33.080355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.080362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.080654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.080661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.080978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.080985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.081296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.081304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 00:24:59.491 [2024-11-26 19:31:33.081641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.491 [2024-11-26 19:31:33.081649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.491 qpair failed and we were unable to recover it. 
00:24:59.491 [... the same connect() failed, errno = 111 / sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. message triplet repeats roughly 110 more times, timestamps advancing from 19:31:33.081845 through 19:31:33.110749 ...]
00:24:59.495 [2024-11-26 19:31:33.110927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.110933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.111292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.111299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.111589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.111596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.111781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.111787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.112010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.112016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 
00:24:59.495 [2024-11-26 19:31:33.112298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.112306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.112643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.112650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.112954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.112961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.113261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.113268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.113592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.113599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 
00:24:59.495 [2024-11-26 19:31:33.113888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.113895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.114189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.114196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.114525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.114531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.114652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.114659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.114986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.114993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 
00:24:59.495 [2024-11-26 19:31:33.115295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.115304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.115638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.115645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.115942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.495 [2024-11-26 19:31:33.115948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.495 qpair failed and we were unable to recover it. 00:24:59.495 [2024-11-26 19:31:33.116122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.116129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.116479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.116486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.116681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.116688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.117008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.117014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.117199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.117206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.117478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.117484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.117665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.117672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.117829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.117836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.118139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.118146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.118328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.118335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.118684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.118691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.118742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.118748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.118939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.118946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.119122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.119129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.119510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.119516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.119811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.119818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.120122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.120129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.120320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.120328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.120689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.120695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.120979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.120986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.121153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.121160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.121454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.121461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.121664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.121671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.121980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.121987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.122154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.122161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.122514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.122521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 00:24:59.496 [2024-11-26 19:31:33.122829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.496 [2024-11-26 19:31:33.122835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.496 qpair failed and we were unable to recover it. 
00:24:59.496 [2024-11-26 19:31:33.123130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.123137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.123449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.123456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.123749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.123756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.123944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.123951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.124162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.124170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 
00:24:59.497 [2024-11-26 19:31:33.124445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.124452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.124630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.124637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.124801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.124808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.125076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.125083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.125263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.125270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 
00:24:59.497 [2024-11-26 19:31:33.125647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.125654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.125970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.125977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.126185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.126192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.126576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.126583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.126763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.126770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 
00:24:59.497 [2024-11-26 19:31:33.127122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.127129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.127316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.127322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.127626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.127633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.127922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.127928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.128085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.128091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 
00:24:59.497 [2024-11-26 19:31:33.128290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.128297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.128636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.128643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.128958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.128965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.129336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.129343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.129659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.129666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 
00:24:59.497 [2024-11-26 19:31:33.129960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.129966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.130251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.130258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.130463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.497 [2024-11-26 19:31:33.130469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.497 qpair failed and we were unable to recover it. 00:24:59.497 [2024-11-26 19:31:33.130793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.130800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.131093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.131107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.131274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.131281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.131434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.131441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.131794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.131801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.132095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.132104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.132470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.132476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.132801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.132808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.133174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.133181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.133410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.133419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.133751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.133757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.133807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.133814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.133874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.133881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.134175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.134182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.134369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.134375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.134551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.134558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.134975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.134981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.135299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.135305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.135595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.135601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.135914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.135921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.135962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.135968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.136186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.136193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.136607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.136614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.136783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.136790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.137081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.137088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.137379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.137387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.137685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.137692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 
00:24:59.498 [2024-11-26 19:31:33.137859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.137867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.138046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.138053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.498 [2024-11-26 19:31:33.138298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.498 [2024-11-26 19:31:33.138305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.498 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.138610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.138617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.138930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.138936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.139220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.139227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.139410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.139416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.139479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.139485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.139537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.139543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.139747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.139754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.140080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.140087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.140423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.140429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.140749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.140756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.141057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.141064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.141362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.141369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.141559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.141566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.141738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.141745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.141920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.141927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.142214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.142221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.142409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.142415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.142595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.142602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.142947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.142954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.143232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.143241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.143423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.143430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.143576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.143583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.143846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.143853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.144149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.144156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.144472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.144479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.144782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.144788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.145098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.145108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 
00:24:59.499 [2024-11-26 19:31:33.145399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.145406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.145819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.499 [2024-11-26 19:31:33.145825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.499 qpair failed and we were unable to recover it. 00:24:59.499 [2024-11-26 19:31:33.146137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.146145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.146324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.146330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.146624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.146631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.146919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.146926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.147225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.147232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.147570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.147576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.147733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.147740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.147952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.147959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.148271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.148277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.148585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.148591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.148766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.148773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.148996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.149002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.149316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.149323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.149608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.149615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.149785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.149792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.150128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.150135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.150412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.150418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.150619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.150626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.150806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.150813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.151167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.151174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.151358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.151364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.151671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.151677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.152019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.152026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.152326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.152333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.152373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.152379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.152597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.152604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.152784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.152790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.153128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.153135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 
00:24:59.500 [2024-11-26 19:31:33.153410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.153417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.153671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.153677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.500 qpair failed and we were unable to recover it. 00:24:59.500 [2024-11-26 19:31:33.153828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.500 [2024-11-26 19:31:33.153837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.154166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.154174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.154536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.154542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 
00:24:59.501 [2024-11-26 19:31:33.154717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.154724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.154970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.154977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.155265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.155272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.155548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.155554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 00:24:59.501 [2024-11-26 19:31:33.155772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.155778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 
00:24:59.501 [2024-11-26 19:31:33.156088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.501 [2024-11-26 19:31:33.156095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.501 qpair failed and we were unable to recover it. 
[identical error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated through 2024-11-26 19:31:33.187690; repeats omitted]
00:24:59.505 [2024-11-26 19:31:33.187852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.187859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.188176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.188183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.188555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.188562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.188737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.188745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.188960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.188967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 
00:24:59.505 [2024-11-26 19:31:33.189117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.189124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.189336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.189343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.189653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.189660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.189866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.189873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.190062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.190069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 
00:24:59.505 [2024-11-26 19:31:33.190258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.190266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.190575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.190582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.190758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.190765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.191173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.191180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.191471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.191477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 
00:24:59.505 [2024-11-26 19:31:33.191681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.191688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.191996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.192002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.192172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.192179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.192476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.192483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 00:24:59.505 [2024-11-26 19:31:33.192786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.505 [2024-11-26 19:31:33.192792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.505 qpair failed and we were unable to recover it. 
00:24:59.505 [2024-11-26 19:31:33.192859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.192865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.193017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.193024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.193341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.193348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.193718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.193725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.193767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.193773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.194054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.194061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.194363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.194370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.194532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.194539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.194800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.194807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.195108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.195115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.195330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.195337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.195515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.195522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.195742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.195748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.196040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.196047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.196239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.196246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.196550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.196557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.196857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.196864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.197167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.197175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.197488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.197495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.197670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.197677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.197886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.197892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.197929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.197935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.198190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.198197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.198499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.198506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.198799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.198806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.199089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.199096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.199279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.199286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.199623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.199630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.199920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.199927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.506 [2024-11-26 19:31:33.200229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.200236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 
00:24:59.506 [2024-11-26 19:31:33.200535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.506 [2024-11-26 19:31:33.200541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.506 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.200833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.200840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.201141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.201148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.201459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.201465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.201767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.201774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 
00:24:59.507 [2024-11-26 19:31:33.202062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.202069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.202404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.202411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.202597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.202603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.202767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.202774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.202942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.202948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 
00:24:59.507 [2024-11-26 19:31:33.203237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.203244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.203424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.203431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.203609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.203615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.203955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.203961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.204215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.204223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 
00:24:59.507 [2024-11-26 19:31:33.204559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.204566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.204763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.204770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.205066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.205073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.205238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.205245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.205469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.205475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 
00:24:59.507 [2024-11-26 19:31:33.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.205699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.206011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.206018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.206332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.206339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.206635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.206642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.507 qpair failed and we were unable to recover it. 00:24:59.507 [2024-11-26 19:31:33.206960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.507 [2024-11-26 19:31:33.206966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 
00:24:59.508 [2024-11-26 19:31:33.207157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.508 [2024-11-26 19:31:33.207164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 00:24:59.508 [2024-11-26 19:31:33.207500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.508 [2024-11-26 19:31:33.207507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 00:24:59.508 [2024-11-26 19:31:33.207678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.508 [2024-11-26 19:31:33.207687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 00:24:59.508 [2024-11-26 19:31:33.207995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.508 [2024-11-26 19:31:33.208002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 00:24:59.508 [2024-11-26 19:31:33.208310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.508 [2024-11-26 19:31:33.208317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.508 qpair failed and we were unable to recover it. 
00:24:59.511 [2024-11-26 19:31:33.238610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.511 [2024-11-26 19:31:33.238617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.238927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.239225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.239231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.239519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.239525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.239829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.239836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.240017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.240024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.240205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.240211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.240401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.240409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.240692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.240699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.240901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.240908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.241098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.241108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.241337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.241344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.241643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.241650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.241954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.241961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.242258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.242264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.242574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.242580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.242764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.242771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.242971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.242978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.243340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.243346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.243517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.243523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.243881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.243888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.243926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.243933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.244218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.244225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.244474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.244481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.244787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.244793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.245094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.245103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.245274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.245280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.245435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.245442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.245627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.245634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.245974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.245980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-11-26 19:31:33.246173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.246180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-11-26 19:31:33.246213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-11-26 19:31:33.246219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.246528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.246534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.246883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.246890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.247165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.247172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-11-26 19:31:33.247479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.247485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.247830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.247837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.248122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.248129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.248432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.248439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.248607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.248613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-11-26 19:31:33.248967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.248974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.249143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.249150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.249463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.249470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.249757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.249763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.250065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.250072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-11-26 19:31:33.250380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.250388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.250695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.250701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.250988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.250997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.251033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.251039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.251404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.251411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-11-26 19:31:33.251627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.251634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.251949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.251955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.252265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.252273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.252483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.252490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-11-26 19:31:33.252531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-11-26 19:31:33.252537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-11-26 19:31:33.252834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.252840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.253041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.253048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.253425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.253432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.253549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.253556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.253821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.253828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-11-26 19:31:33.254034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.254040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.254344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.254351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.254653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.254660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.254692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.254699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.255049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.255056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-11-26 19:31:33.255393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.255399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.255599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.255606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.255906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.255913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.256280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.256287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.256463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.256470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-11-26 19:31:33.256761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.256767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.257059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.257066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.257232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.257239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.257506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.257513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-11-26 19:31:33.257857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-11-26 19:31:33.257864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-11-26 19:31:33.258142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.258149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.258300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.258307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.258506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.258513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.258732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.258738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.259013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.259019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.259364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.259371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.259753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.259759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.259962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.514 [2024-11-26 19:31:33.259969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.514 qpair failed and we were unable to recover it.
00:24:59.514 [2024-11-26 19:31:33.260255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.260262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.260564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.260571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.260902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.260909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.261065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.261072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.261355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.261362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.261684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.261691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.261992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.261998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.262181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.262191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.262373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.262379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.262687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.262694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.262990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.262997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.263262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.263269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.263563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.263570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.263769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.263775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.264105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.264112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.264392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.264399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.264689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.264696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.264968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.264975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.265293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.265300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.265612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.265619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.265928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.265935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.266206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.266213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.266501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.266508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.266885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.266891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.267268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.267275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.267432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.267439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.267760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.267767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.268098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.515 [2024-11-26 19:31:33.268107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.515 qpair failed and we were unable to recover it.
00:24:59.515 [2024-11-26 19:31:33.268425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.268431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.268788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.268795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.269091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.269097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.269460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.269468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.269793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.269800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.270092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.270099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.270496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.270503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.270778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.270785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.270937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.270943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.271141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.271148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.271180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.271187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.271471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.271477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.271634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.271641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.271907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.271913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.272134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.272141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.272458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.272465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.272638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.272644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.272869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.272876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.273206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.273213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.273409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.273416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.273741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.273748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.273944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.273951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.274269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.274276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.274444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.274451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.274768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.274774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.275105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.275112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.275269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.275276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.275503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.275510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.275806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.275812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.516 qpair failed and we were unable to recover it.
00:24:59.516 [2024-11-26 19:31:33.275996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.516 [2024-11-26 19:31:33.276003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.276302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.276309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.276464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.276471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.276780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.276787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.276968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.276974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.277264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.277270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.277547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.277554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.277586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.277592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.277961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.277968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.278271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.278277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.278626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.278632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.278946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.278953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.279168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.279174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.279355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.279361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.279640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.279648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.280018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.280025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.280210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.280217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.280539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.280547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.280847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.280853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.281150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.281157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.281491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.281498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.281775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.281782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.281965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.281972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.282292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.282299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.282627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.282952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.282958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.283293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.283300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.283621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.283628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.517 [2024-11-26 19:31:33.283933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.517 [2024-11-26 19:31:33.283940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.517 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.284223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.284230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.284520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.284526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.284816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.284823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.285130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.285137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.285327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.285333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.285645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.285652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.285969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.285976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.286121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.286128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.286424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.286431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.286593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.286599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.286632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.286639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.286956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.286963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.287335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.287342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.287509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.287515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.287880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.287886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.288173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.288180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.288455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.288461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.288646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.288652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.288948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.288954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.289304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.289311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.289605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-11-26 19:31:33.289612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
00:24:59.518 [2024-11-26 19:31:33.289805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.289812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.290158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.290166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.290347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.290354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.290756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.290762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.291060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.291068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 
00:24:59.518 [2024-11-26 19:31:33.291250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.291256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.291633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-11-26 19:31:33.291639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-11-26 19:31:33.291938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.291945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.292283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.292290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.292677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.292683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 
00:24:59.519 [2024-11-26 19:31:33.292997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.293004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.293170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.293177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.293554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.293561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.293860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.293867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.294171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.294177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 
00:24:59.519 [2024-11-26 19:31:33.294537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.294544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.294699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.294705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.295094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.295107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.295274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.295281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.295590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.295597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 
00:24:59.519 [2024-11-26 19:31:33.295889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.295896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.296181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.296188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.296518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.296525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.296687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.296693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.296868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.296874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 
00:24:59.519 [2024-11-26 19:31:33.297182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.297189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.297566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.297572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.297820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.297826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.297863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.519 [2024-11-26 19:31:33.297869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.519 qpair failed and we were unable to recover it. 00:24:59.519 [2024-11-26 19:31:33.298153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.298159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.298487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.298494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.298680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.298687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.298987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.298994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.299269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.299276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.299443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.299450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.299755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.299762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.300076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.300083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.300289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.300296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.300670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.300677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.300844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.300851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.301205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.301212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.301405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.301411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.301714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.301721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.302024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.302030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.302359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.302367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.302563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.302570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.302846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.302852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.303167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.303173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.303332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.303339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.303558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.303564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.303734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.303741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.304088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.304095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.304281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.304288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.304444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.304451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.304754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.304761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 
00:24:59.520 [2024-11-26 19:31:33.304920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.304926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.305258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.305265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.305434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.305440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.520 [2024-11-26 19:31:33.305771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.520 [2024-11-26 19:31:33.305777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.520 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.306090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.306096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.306471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.306478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.306664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.306671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.306839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.306846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.307036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.307043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.307273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.307281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.307543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.307549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.307711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.307718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.307904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.307911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.308096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.308104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.308407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.308414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.308691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.308697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.309004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.309011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.309348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.309355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.309514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.309521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.309748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.309755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.310086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.310093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.310371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.310379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.310685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.310692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.310868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.310875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.311205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.311212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.311521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.311528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.311732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.311738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.311917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.311923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.312125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.312131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.312504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.312512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 
00:24:59.521 [2024-11-26 19:31:33.312696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.312703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.313011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.313018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.521 qpair failed and we were unable to recover it. 00:24:59.521 [2024-11-26 19:31:33.313355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.521 [2024-11-26 19:31:33.313362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.313563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.313569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.313861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.313867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.314054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.314060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.314403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.314410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.314447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.314453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.314617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.314624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.314939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.314945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.315171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.315178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.315497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.315504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.315715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.315722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.315969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.315975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.316264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.316272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.316599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.316606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.316803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.316810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.316926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.316933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.317203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.317210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.317553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.317559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.317874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.317881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.318048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.318055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.318388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.318395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.318729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.318736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.319094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.319103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.319424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.319430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.319606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.319613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.319980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.319987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.320159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.320167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 00:24:59.522 [2024-11-26 19:31:33.320469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.522 [2024-11-26 19:31:33.320476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.522 qpair failed and we were unable to recover it. 
00:24:59.522 [2024-11-26 19:31:33.320788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.320795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.321018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.321025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.321341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.321349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.321692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.321699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.322052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.322058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-11-26 19:31:33.322368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.322375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.322526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.322533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.322773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.322779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.323148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.323155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.323485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.323493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-11-26 19:31:33.323793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.323800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.323858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.323864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.324217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.324223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.324390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.324397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.324814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.324821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-11-26 19:31:33.325127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.325135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.325457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.325463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.325815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.325822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.325992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.325999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.326270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.326277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-11-26 19:31:33.326610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.326617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.326897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.326904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.327088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.327095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.327285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.327292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.327594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.327600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-11-26 19:31:33.327798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.327804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.328019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.328026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.328312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.328319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-11-26 19:31:33.328641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-11-26 19:31:33.328647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.328819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.328826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-11-26 19:31:33.328879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.328885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.329184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.329191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.329483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.329489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.329792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.329798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.330173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.330180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-11-26 19:31:33.330360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.330367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.330698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.330705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-11-26 19:31:33.330873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-11-26 19:31:33.330880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.331211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.331219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.331531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.331538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 
00:24:59.800 [2024-11-26 19:31:33.331712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.331719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.332085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.332092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.332293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.332301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.332562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.332569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.332875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.332882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 
00:24:59.800 [2024-11-26 19:31:33.333180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.333187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.333405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.333411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.333760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.800 [2024-11-26 19:31:33.333767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.800 qpair failed and we were unable to recover it. 00:24:59.800 [2024-11-26 19:31:33.333936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.333943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-11-26 19:31:33.334277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.334286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-11-26 19:31:33.334610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.334617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-11-26 19:31:33.334789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.334795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-11-26 19:31:33.334944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.334951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-11-26 19:31:33.335135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.335142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 00:24:59.801 [2024-11-26 19:31:33.335452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.335459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
00:24:59.801 [2024-11-26 19:31:33.335632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.801 [2024-11-26 19:31:33.335639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.801 qpair failed and we were unable to recover it. 
[... the same three-message failure (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeats continuously from 19:31:33.335813 through 19:31:33.366537, all with identical errno, tqpair, address, and port; repeats elided ...]
00:24:59.804 [2024-11-26 19:31:33.366735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.366741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.367062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.367068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.367374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.367381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.367703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.367710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.368025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.368031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 
00:24:59.804 [2024-11-26 19:31:33.368292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.368299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.368607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.368613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.368902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.368909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.369069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.369075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.369371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.369378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 
00:24:59.804 [2024-11-26 19:31:33.369709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.369716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.369892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.369898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.370102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.370109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.370271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.370278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.370558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.370564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 
00:24:59.804 [2024-11-26 19:31:33.370733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.370740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.371024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.371031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.371330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.371337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.371520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.371527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.371880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.371887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 
00:24:59.804 [2024-11-26 19:31:33.372060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.372067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.372227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.372234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.372537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.372543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.372707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.804 [2024-11-26 19:31:33.372713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.804 qpair failed and we were unable to recover it. 00:24:59.804 [2024-11-26 19:31:33.372976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.372983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.373169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.373176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.373369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.373376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.373685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.373692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.373862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.373868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.374205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.374212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.374504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.374511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.374818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.374824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.375139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.375146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.375216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.375222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.375552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.375559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.375733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.375740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.375888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.375895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.376108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.376116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.376402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.376409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.376749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.376757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.377048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.377054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.377353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.377360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.377517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.377523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.377889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.377895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.378031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.378038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.378256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.378263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.378639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.378646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.378977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.378984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.379275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.379282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.379595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.379602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 
00:24:59.805 [2024-11-26 19:31:33.379912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.379940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.380276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.380283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.380443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.380449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.805 qpair failed and we were unable to recover it. 00:24:59.805 [2024-11-26 19:31:33.380695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.805 [2024-11-26 19:31:33.380702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.380910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.380917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 
00:24:59.806 [2024-11-26 19:31:33.381219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.381226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.381390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.381396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.381677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.381683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.381860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.381866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.382192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.382199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 
00:24:59.806 [2024-11-26 19:31:33.382545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.382552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.382854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.382861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.383175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.383182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.383544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.383551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.383865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.383872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 
00:24:59.806 [2024-11-26 19:31:33.384037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.384044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.384087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.384094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.384432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.384439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.384765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.384772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.385060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.385066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 
00:24:59.806 [2024-11-26 19:31:33.385379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.385386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.385786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.385792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.386115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.386122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.386289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.386295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 00:24:59.806 [2024-11-26 19:31:33.386526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.806 [2024-11-26 19:31:33.386533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.806 qpair failed and we were unable to recover it. 
00:24:59.806 [2024-11-26 19:31:33.386693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.806 [2024-11-26 19:31:33.386699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.806 qpair failed and we were unable to recover it.
[... identical retry entries repeated: connect() failed, errno = 111 (ECONNREFUSED) followed by sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", continuing from 19:31:33.386933 through 19:31:33.417344 ...]
00:24:59.809 [2024-11-26 19:31:33.417514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.417520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.417638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.417645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.417877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.417884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.418167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.418174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.418479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.418486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 
00:24:59.809 [2024-11-26 19:31:33.418842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.418848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.419023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.809 [2024-11-26 19:31:33.419030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.809 qpair failed and we were unable to recover it. 00:24:59.809 [2024-11-26 19:31:33.419374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.419381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.419552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.419559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.419824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.419831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 
00:24:59.810 [2024-11-26 19:31:33.420142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.420150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.420464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.420471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.420753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.420759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.420937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.420945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.421175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.421182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 
00:24:59.810 [2024-11-26 19:31:33.421386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.421393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.421695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.421703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.422026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.422033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.422207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.422214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.422391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.422398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 
00:24:59.810 [2024-11-26 19:31:33.422558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.422564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.422874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.422881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.423172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.423179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.423419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.423426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.423702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.423708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 
00:24:59.810 [2024-11-26 19:31:33.423871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.423879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.424314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.424321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.424656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.424663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.424989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.424995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.425362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.425369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 
00:24:59.810 [2024-11-26 19:31:33.425704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.425711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.426038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.426045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.426355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.426362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.810 [2024-11-26 19:31:33.426722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.810 [2024-11-26 19:31:33.426729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.810 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.427064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.427071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.427112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.427120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.427412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.427420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.427722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.427729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.428062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.428069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.428372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.428379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.428550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.428558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.428893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.428901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.429184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.429191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.429368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.429376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.429600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.429607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.429909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.429917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.430089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.430096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.430450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.430457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.430785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.430792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.431116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.431123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.431462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.431469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.431761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.431768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.432115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.432123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.432405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.432412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.432726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.432733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.433032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.433039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.433342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.433349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.433682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.433689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.433995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.434001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.434188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.434195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 
00:24:59.811 [2024-11-26 19:31:33.434572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.434579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.434928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.434935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.435099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.811 [2024-11-26 19:31:33.435115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.811 qpair failed and we were unable to recover it. 00:24:59.811 [2024-11-26 19:31:33.435402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.435409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.435571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.435577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 
00:24:59.812 [2024-11-26 19:31:33.435751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.435758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.435952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.435960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.436241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.436249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.436559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.436566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.436876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.436882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 
00:24:59.812 [2024-11-26 19:31:33.437175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.437182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.437503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.437510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.437679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.437686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.437866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.437873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.438037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.438044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 
00:24:59.812 [2024-11-26 19:31:33.438319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.438325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.438622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.438631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.438998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.439005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.439368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.439376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 00:24:59.812 [2024-11-26 19:31:33.439715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.812 [2024-11-26 19:31:33.439722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.812 qpair failed and we were unable to recover it. 
00:24:59.815 [2024-11-26 19:31:33.470292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.470299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.470477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.470484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.470695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.470702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.470865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.470872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.471061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.471068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 
00:24:59.815 [2024-11-26 19:31:33.471401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.471408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.471590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.471598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.471766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.471774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.815 [2024-11-26 19:31:33.471940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.815 [2024-11-26 19:31:33.471947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.815 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.472126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.472133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.472440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.472446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.472748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.472754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.472834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.472840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.473044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.473050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.473234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.473241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.473551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.473557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.473751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.473758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.474071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.474078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.474376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.474383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.474560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.474567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.474813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.474820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.475151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.475158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.475339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.475345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.475500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.475507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.475806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.475813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.475978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.475984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.476180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.476186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.476525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.476532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.476714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.476721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.477036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.477042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.477435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.477442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.477829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.477836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.478117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.478124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.478459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.478466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.478623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.478630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.478975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.478982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.479394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.479400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.479703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.479709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.480053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.480060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.480367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.480373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 
00:24:59.816 [2024-11-26 19:31:33.480722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.480729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.480885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.480892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.481179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.481186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.816 qpair failed and we were unable to recover it. 00:24:59.816 [2024-11-26 19:31:33.481491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.816 [2024-11-26 19:31:33.481498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.481790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.481796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.482107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.482114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.482391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.482400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.482720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.482727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.482945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.482952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.483270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.483277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.483600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.483606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.483776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.483782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.484126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.484132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.484357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.484363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.484618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.484625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.484925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.484932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.485275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.485283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.485458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.485464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.485765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.485771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.486056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.486063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.486255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.486262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.486300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.486306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.486669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.486676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.486841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.486847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.487057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.487063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.487422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.487429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.487718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.487725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.488014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.488021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.488317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.488323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.488485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.488491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.488868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.488874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.489168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.489175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.489494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.489501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.489826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.489832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.490225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.490232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.817 [2024-11-26 19:31:33.490588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.490594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.490750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.490756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.490982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.490988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.491262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.491269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 00:24:59.817 [2024-11-26 19:31:33.491583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.817 [2024-11-26 19:31:33.491590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.817 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-11-26 19:31:33.521884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.521891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.522074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.522081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.522397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.522404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.522749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.522756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.523075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.523081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-11-26 19:31:33.523439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.523446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.523695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.523701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.524008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.524015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.524335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.524342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.524677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.524685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-11-26 19:31:33.524860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.524867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.524917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.524923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.525086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.525093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-11-26 19:31:33.525248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-11-26 19:31:33.525255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.525535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.525543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.525837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.525843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.526136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.526144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.526476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.526483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.526814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.526821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.527242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.527249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.527410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.527417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.527605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.527611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.527912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.527918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.528223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.528231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.528403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.528410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.528690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.528697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.529002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.529009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.529172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.529179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.529490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.529497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.529807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.529813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.529980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.529987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.530353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.530360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.530670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.530676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.530858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.530864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.531201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.531208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.531370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.531376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.531690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.531697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.532016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.532023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.532177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.532184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.532530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.532536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.532606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.532612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.532888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.532894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.533180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.533186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.533511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.533518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.533816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.533823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-11-26 19:31:33.534116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.534122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-11-26 19:31:33.534319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-11-26 19:31:33.534326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.534695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.534701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.534773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.534779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.535085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.535092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.535402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.535409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.535795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.535802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.536125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.536132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.536416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.536423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.536730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.536737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.537032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.537346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.537354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.537656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.537663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.538027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.538033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.538070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.538077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.538377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.538384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.538686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.538693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.538987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.538994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.539307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.539314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.539673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.539680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.539973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.539979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.540228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.540235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.540512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.540518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.540814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.540821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.541124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.541131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.541327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.541334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.541659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.541666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.542009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.542016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.542323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.542329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-11-26 19:31:33.542627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-11-26 19:31:33.542633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.823 [2024-11-26 19:31:33.542945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.823 [2024-11-26 19:31:33.542952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.823 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7f6020000b90 (10.0.0.2:4420) from 19:31:33.543126 through 19:31:33.573227 ...]
00:24:59.827 [2024-11-26 19:31:33.573298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.827 [2024-11-26 19:31:33.573305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.827 qpair failed and we were unable to recover it.
00:24:59.827 [2024-11-26 19:31:33.573576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.573583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.573623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.573630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.573913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.573921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.574301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.574308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.574624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.574631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 
00:24:59.827 [2024-11-26 19:31:33.574792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.574800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.575174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.575181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.575458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.575465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.575757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.575763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.576060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.576067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 
00:24:59.827 [2024-11-26 19:31:33.576103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.576109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.576285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.576292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.576622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.576629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.576785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.576794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.577165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.577173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 
00:24:59.827 [2024-11-26 19:31:33.577356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.577363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.577583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.577590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.577915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.577921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.578094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.578105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.827 qpair failed and we were unable to recover it. 00:24:59.827 [2024-11-26 19:31:33.578478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.827 [2024-11-26 19:31:33.578484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.578764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.578771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.578960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.578966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.579166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.579174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.579260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.579266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.579578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.579584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.579886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.579893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.580060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.580067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.580256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.580264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.580555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.580563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.580754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.580761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.581075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.581082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.581411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.581419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.581696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.581703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.581874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.581881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.582115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.582122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.582398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.582405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.582721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.582728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.583004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.583011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.583397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.583405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.583705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.583711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.584103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.584110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.584401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.584409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.584711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.584717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.584878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.584884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.585143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.585150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.585464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.585471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.585762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.585768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.585936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.585942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.586267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.586274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.586455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.586462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 
00:24:59.828 [2024-11-26 19:31:33.586713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.586721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.586761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.586767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.587075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.587082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.587391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.828 [2024-11-26 19:31:33.587400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.828 qpair failed and we were unable to recover it. 00:24:59.828 [2024-11-26 19:31:33.587693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.587700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-11-26 19:31:33.588025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.588032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.588351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.588358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.588681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.588687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.588984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.588990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.589155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.589162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-11-26 19:31:33.589498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.589505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.589647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.589654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.589988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.589995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.590179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.590186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.590356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.590363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-11-26 19:31:33.590700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.590706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.590986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.590993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.591173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.591180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.591333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.591340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.591655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.591662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-11-26 19:31:33.591835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.591842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.592172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.592179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.592383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.592390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.592702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.592708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-11-26 19:31:33.592883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-11-26 19:31:33.592891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-11-26 19:31:33.593059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.829 [2024-11-26 19:31:33.593066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.829 qpair failed and we were unable to recover it.
... last three messages repeated for every reconnect attempt from [2024-11-26 19:31:33.593345] through [2024-11-26 19:31:33.623629]; each attempt targets tqpair=0x7f6020000b90 at addr=10.0.0.2, port=4420 and fails with errno = 111 ...
00:24:59.832 [2024-11-26 19:31:33.623786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.832 [2024-11-26 19:31:33.623793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.832 qpair failed and we were unable to recover it.
00:24:59.832 [2024-11-26 19:31:33.624113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-11-26 19:31:33.624120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.624429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.624435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.624734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.624740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.624919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.624925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.625127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.625134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.625267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.625274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.625657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.625664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.625952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.625958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.626132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.626142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.626469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.626476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.626798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.626805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.626949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.626955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.627242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.627249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.627290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.627296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.627600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.627606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.627814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.627821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.628030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.628037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.628336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.628343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.628512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.628519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.628562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.628568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.628823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.629023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.629030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.629200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.629207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.629544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.629553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.629742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.629748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.630044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.630051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.630349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.630356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.630660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.630667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.630854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.630861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.631214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.631221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.631550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.631557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.631729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.631736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.631910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.631917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.632090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.632096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.632451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.632458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-11-26 19:31:33.632579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.632586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.632961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.632968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.633267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.633274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-11-26 19:31:33.633447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-11-26 19:31:33.633456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.633770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.633777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.633948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.633955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.634268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.634275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.634553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.634560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.634867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.634874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.635191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.635199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.635506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.635513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.635814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.635820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.635857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.635863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.636217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.636224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.636437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.636443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.636632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.636639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.636920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.636927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.637105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.637112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.637295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.637302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.637479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.637485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.637675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.637682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.638051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.638058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.638212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.638219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.638573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.638580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.638708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.638715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.639001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.639007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.639319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.639327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.639605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.639613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.639815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.639824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.640148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.640156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.640464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.640471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.640733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.640739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.641041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.641048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.641351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.641358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.641643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.641649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 
00:24:59.834 [2024-11-26 19:31:33.641959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.641966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.642250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.642257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.642461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.642468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.834 [2024-11-26 19:31:33.642812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.834 [2024-11-26 19:31:33.642820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.834 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.643107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.643115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 
00:24:59.835 [2024-11-26 19:31:33.643413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.643421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.643711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.643718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.644038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.644046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.644219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.644226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.644502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.644509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 
00:24:59.835 [2024-11-26 19:31:33.644663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.644670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.644893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.644900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.645295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.645302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.645616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.645623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 00:24:59.835 [2024-11-26 19:31:33.645914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.835 [2024-11-26 19:31:33.645921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:24:59.835 qpair failed and we were unable to recover it. 
00:24:59.835 [2024-11-26 19:31:33.646212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.646220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.646556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.646563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.646727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.646734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.647001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.647008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.647180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.647188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.647425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.647433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.647602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.647609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.647815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.647821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.648129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.648137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:24:59.835 [2024-11-26 19:31:33.648480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.835 [2024-11-26 19:31:33.648487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:24:59.835 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.648794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.648803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.649089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.649096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.649422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.649430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.649736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.649743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.649790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.649796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.650096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.650108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.650264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.650271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.650579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.650586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.650747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.650756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.651017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.651024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.651334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.651341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.651655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.651663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.651977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.651984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.652153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.652160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.652361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.652368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.652526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.652533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.652719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.652727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.652894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.652902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.105 [2024-11-26 19:31:33.653123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.105 [2024-11-26 19:31:33.653131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.105 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.653433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.653441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.653796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.653804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.654105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.654113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.654267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.654274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.654562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.654570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.654860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.654867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.655156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.655164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.655534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.655541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.655829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.655837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.656015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.656022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.656440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.656449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.656749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.656757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.657109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.657117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.657398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.657406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.657544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.657552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.657840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.657848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.657981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.657988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.658334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.658341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.658658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.658665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.658975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.658983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.659321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.659329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.659493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.659500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.659697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.659705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.659998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.660005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.660162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.660169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.660449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.660456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.660615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.660623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.660947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.660955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.661248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.661256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.661441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.661453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.661770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.661778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.662079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.662086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.662439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.662447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.662764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.662771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.663074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.663081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.663311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.663318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.663636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.106 [2024-11-26 19:31:33.663644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.106 qpair failed and we were unable to recover it.
00:25:00.106 [2024-11-26 19:31:33.663821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.663829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.664161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.664169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.664436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.664444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.664739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.664746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.664905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.664913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.665097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.665107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.665289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.665296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.665495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.665502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.665850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.665857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.666140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.666148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.666472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.666479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.666657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.666665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.666899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.666906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.667192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.667200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.667578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.667586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.667902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.667910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.668203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.668210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.668515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.668522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.668799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.668806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.668979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.668987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.669359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.669367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.669661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.669668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.669976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.669983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.670345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.670352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.670513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.670520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.670704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.670711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.671037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.671044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.671337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.671345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.671646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.107 [2024-11-26 19:31:33.671654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.107 qpair failed and we were unable to recover it.
00:25:00.107 [2024-11-26 19:31:33.671850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.671857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.672181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.672189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.672491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.672498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.672794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.672803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.672970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.672978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 
00:25:00.107 [2024-11-26 19:31:33.673346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.673353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.673502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.673509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.673812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.673819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.107 [2024-11-26 19:31:33.674005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.107 [2024-11-26 19:31:33.674012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.107 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.674295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.674303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.674640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.674647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.674926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.674933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.675243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.675251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.675559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.675566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.675879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.675886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.676168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.676175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.676351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.676359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.676580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.676588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.676921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.677227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.677235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.677502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.677509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.677792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.677800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.678105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.678113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.678427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.678434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.678675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.678682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.678822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.678829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.679058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.679066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.679224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.679231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.679438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.679445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.679716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.679724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.680095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.680106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.680409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.680416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.680734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.680741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.681026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.681034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.681347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.681355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.681664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.681671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.681988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.681995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.682210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.682218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.682562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.682569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.682747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.682754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.683066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.683075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.683236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.683244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.683507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.683515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.683822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.683829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.684003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.684014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 
00:25:00.108 [2024-11-26 19:31:33.684293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.684301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.108 [2024-11-26 19:31:33.684628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.108 [2024-11-26 19:31:33.684635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.108 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.684802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.684810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.684945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.684952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.685115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.685123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.685485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.685492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.685784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.685791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.686134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.686142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.686504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.686512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.686682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.686690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.687009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.687017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.687312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.687320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.687599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.687606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.687843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.687851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.688035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.688042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.688227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.688234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.688525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.688532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.688766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.688773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.689065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.689072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.689388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.689701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.689708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.690063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.690070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.690396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.690404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.690439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.690446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.690768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.690776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.690930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.690939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.691277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.691284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.691529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.691536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.691867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.691875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.692220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.692227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.692377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.692384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.692605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.692612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.693003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.693010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.693337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.693344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.693508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.693515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 
00:25:00.109 [2024-11-26 19:31:33.693555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.693564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.693756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.693763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.694064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.694071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.109 [2024-11-26 19:31:33.694385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.109 [2024-11-26 19:31:33.694392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.109 qpair failed and we were unable to recover it. 00:25:00.110 [2024-11-26 19:31:33.694690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.110 [2024-11-26 19:31:33.694697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.110 qpair failed and we were unable to recover it. 
00:25:00.110 [2024-11-26 19:31:33.694998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.110 [2024-11-26 19:31:33.695005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.110 qpair failed and we were unable to recover it.
[The connect() errno 111 / qpair-failed message triplet above repeats continuously for tqpair=0x7f6020000b90 (addr=10.0.0.2, port=4420) with successive timestamps through 2024-11-26 19:31:33.717206.]
00:25:00.112 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:00.112 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:25:00.112 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:00.112 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:00.112 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[Interleaved connect() errno 111 / qpair-failed messages for tqpair=0x7f6020000b90 (addr=10.0.0.2, port=4420) continue between the xtrace lines above, timestamps 2024-11-26 19:31:33.717420 through 19:31:33.719175.]
[The connect() errno 111 / qpair-failed messages for tqpair=0x7f6020000b90 (addr=10.0.0.2, port=4420) continue repeating through 2024-11-26 19:31:33.723607.]
00:25:00.113 [2024-11-26 19:31:33.723733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.723740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.724046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.724053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.724254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.724260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.724574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.724581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.724933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.724940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.725283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.725291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.725646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.725654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.725925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.725932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.726090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.726097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.726496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.726504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.726805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.726811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.726852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.726858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.727172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.727179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.727456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.727463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.727796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.727804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.727987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.727995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.728318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.728325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.728479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.728486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.728760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.728767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.729059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.729065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.729432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.729441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.729817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.729824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.730144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.730151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.730476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.730485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.730769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.730777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.730939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.730946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.731110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.731118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.731428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.731434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.731737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.731744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.732005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.732012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 
00:25:00.113 [2024-11-26 19:31:33.732412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.732421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.732717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.113 [2024-11-26 19:31:33.732724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.113 qpair failed and we were unable to recover it. 00:25:00.113 [2024-11-26 19:31:33.733048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.733056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.733451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.733459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.733750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.733757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.733935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.733942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.734249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.734258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.734578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.734586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.734907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.734914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.735232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.735240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.735548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.735555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.735593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.735600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.735964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.735972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.736290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.736297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.736458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.736465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.736711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.736718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.737043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.737050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.737438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.737445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.737742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.737750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.738103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.738111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.738263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.738270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.738578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.738585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.738742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.738750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.739038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.739045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.739316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.739324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.739644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.739652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.740032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.740040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.740216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.740223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.740463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.740471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.740766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.740773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.740955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.740962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.741139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.741146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.741484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.741491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.741670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.741677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 00:25:00.114 [2024-11-26 19:31:33.742003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.114 [2024-11-26 19:31:33.742011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.114 qpair failed and we were unable to recover it. 
00:25:00.114 [2024-11-26 19:31:33.742196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.742203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.742564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.742572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.742743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.742751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.742942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.742949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.743246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.743254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.743562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.743570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.743754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.743761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.743967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.743974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.744134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.744142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.744359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.744365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.744678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.744686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.744865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.744871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.745020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.745027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.115 [2024-11-26 19:31:33.745344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.745353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.115 [2024-11-26 19:31:33.745528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.745537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.115 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.115 [2024-11-26 19:31:33.745870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.745878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.746198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.746204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.746362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.746369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.746514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.746521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.746711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.746718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.747074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.747080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.747363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.747370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.747673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.747680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.747981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.747988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.748291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.748298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.748472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.748478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.748843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.748849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.749104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.749111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.749432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.749439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.749744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.749751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.750044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.750051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.750246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.750253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.750410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.750417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 
00:25:00.115 [2024-11-26 19:31:33.750723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.750729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.751053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.751059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.115 [2024-11-26 19:31:33.751341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.115 [2024-11-26 19:31:33.751348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.115 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.751633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.751640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.751934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.751940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.752191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.752198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.752524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.752531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.752831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.752838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.753143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.753150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.753457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.753464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.753752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.753761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.753935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.753942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.754315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.754322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.754630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.754637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.754819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.754826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.755133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.755140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.755461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.755467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.755763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.755770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.755803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.755809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.756087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.756093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.756272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.756278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.756501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.756508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.756806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.756813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.756968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.756974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.757310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.757317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.757590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.757776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.757786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.758095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.758105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.758297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.758304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.758566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.758573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.758728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.758735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.758895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.758902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.759069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.759076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.759351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.759358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.759679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.759686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.759985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.759992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.760283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.760290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.760477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.760484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.760819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.760825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-11-26 19:31:33.761104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.761111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-11-26 19:31:33.761428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-11-26 19:31:33.761435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.761740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.761746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.761940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.761946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.762306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.762313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.762594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.762600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.762879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.762886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.763171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.763178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.763557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.763565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.763850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.763857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.764019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.764026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.764186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.764195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.764484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.764491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.764781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.764788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.765093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.765103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.765327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.765334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.765654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.765661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.765824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.765831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.766198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.766205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.766389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.766397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.766706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.766713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.767028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.767035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.767200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.767207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.767431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.767438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.767737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.767744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.768064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.768071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.768264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.768271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.768480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.768487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.768817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.768824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.769108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.769115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-11-26 19:31:33.769311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-11-26 19:31:33.769318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-11-26 19:31:33.769628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.769635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.769961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.769968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.770253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.770260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.770534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.770541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.770731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.770738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.771077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.771083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.771284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.771291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.117 [2024-11-26 19:31:33.771500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.117 [2024-11-26 19:31:33.771506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.117 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.771560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.771566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.771733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.771739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 Malloc0
00:25:00.118 [2024-11-26 19:31:33.772082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.772089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.772482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.772490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.118 [2024-11-26 19:31:33.772819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.772826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:00.118 [2024-11-26 19:31:33.773174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.773181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.118 [2024-11-26 19:31:33.773358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.773366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.118 [2024-11-26 19:31:33.773659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.773666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.773980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.773986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.774062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.774068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.774378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.774385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.774664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.774671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.774886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.774893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.775226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.775233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.775534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.775541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.775857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.775864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.776017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.776023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.776200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.776207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.776424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.776430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.776711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.776718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.776891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.776898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.777094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.777103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.777403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.777410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.777706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.777712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.778016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.778023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.778244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.778251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.778433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.778439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.778582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.778589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.778783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.778790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.779074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.779081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.779261] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:00.118 [2024-11-26 19:31:33.779457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.779464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.779502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.779509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.779817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.779824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.780120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.780127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.780458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.780465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.118 [2024-11-26 19:31:33.780634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.118 [2024-11-26 19:31:33.780640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.118 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.780849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.780856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.781044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.781051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.781323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.781329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.781654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.781660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.781841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.781848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.782158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.782164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.782355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.782362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.782546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.782552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.782869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.782875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.783035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.783043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.783396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.783706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.783712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.784046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.784052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.784372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.784379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.784674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.784682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.785061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.785068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.785367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.785374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.785651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.785658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.785821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.785827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.786144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.786475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.786482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.786715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.786722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.787007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.787013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.787196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.787204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.787579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.787586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.119 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:00.119 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.119 [2024-11-26 19:31:33.787919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.787927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.119 [2024-11-26 19:31:33.788264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.788272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.788618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.788625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.788916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.788923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.789249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.789256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.789446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.789453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.789495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.119 [2024-11-26 19:31:33.789501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.119 qpair failed and we were unable to recover it.
00:25:00.119 [2024-11-26 19:31:33.789653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.789660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.790040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.790047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.790419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.790426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.790709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.790715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.790898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.790905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.791140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.791147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.791479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.791486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.791795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.791801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.792070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.792077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.792433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.792440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.792773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.792780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.793028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.793034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.793347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.793354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.793672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.793678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.793980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.793986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.794332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.794339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.794385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.794392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.794580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.794587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.794902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.794909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.795222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.795229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.795636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.795644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.795802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.795809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.120 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:00.120 [2024-11-26 19:31:33.796110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.796118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.120 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.120 [2024-11-26 19:31:33.796490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.796496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.796837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.796843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.797050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.797057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.797273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.797279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.797658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.797665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.797998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.798004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.798364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.798371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.798703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.798709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.798849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.798855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.799078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.799084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.799274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.120 [2024-11-26 19:31:33.799281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.120 qpair failed and we were unable to recover it.
00:25:00.120 [2024-11-26 19:31:33.799490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-11-26 19:31:33.799498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-11-26 19:31:33.799830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-11-26 19:31:33.799837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-11-26 19:31:33.800155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-11-26 19:31:33.800161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.121 [2024-11-26 19:31:33.800364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-11-26 19:31:33.800371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-11-26 19:31:33.800627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-11-26 19:31:33.800633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 
00:25:00.121 [2024-11-26 19:31:33.800981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.800987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.801328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.801335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.801533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.801539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.801814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.801821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.802105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.802402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.802408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.802598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.802606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.802836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.802843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.803150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.803157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.803387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.803394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.803800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.803807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.121 [2024-11-26 19:31:33.804130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.804138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.121 [2024-11-26 19:31:33.804437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.804444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.804824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.804831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.805103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.805110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.805280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.805287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.805611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.805617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.805777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.805784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.806142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.806149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.806458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.806465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.806749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.806755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.807087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.807093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.807457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.121 [2024-11-26 19:31:33.807464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.807503] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:00.121 [2024-11-26 19:31:33.810026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.121 [2024-11-26 19:31:33.810134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.121 [2024-11-26 19:31:33.810148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.121 [2024-11-26 19:31:33.810154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.121 [2024-11-26 19:31:33.810159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.121 [2024-11-26 19:31:33.810174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:00.121 [2024-11-26 19:31:33.819894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.121 [2024-11-26 19:31:33.819944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.121 [2024-11-26 19:31:33.819954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.121 [2024-11-26 19:31:33.819959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.121 [2024-11-26 19:31:33.819964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.121 19:31:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3903864
00:25:00.121 [2024-11-26 19:31:33.819977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.121 qpair failed and we were unable to recover it.
00:25:00.121 [2024-11-26 19:31:33.829928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.121 [2024-11-26 19:31:33.830013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.121 [2024-11-26 19:31:33.830025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.121 [2024-11-26 19:31:33.830030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.121 [2024-11-26 19:31:33.830034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.121 [2024-11-26 19:31:33.830045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.839939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.840003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.840014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.840018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.840023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.840033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.849893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.849991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.850003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.850008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.850012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.850023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.859895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.859940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.859950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.859957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.859962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.859972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.869918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.869967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.869977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.869982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.869987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.869996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.879828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.879886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.879895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.879900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.879905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.879915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.889992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.890089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.890098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.890107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.890111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.890122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.900011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.900073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.900083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.900088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.900092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.900105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.910046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.910096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.910112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.910118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.910122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.910132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.919946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.920042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.920051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.920057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.920062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.920073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.930086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.930138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.930148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.930153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.930158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.930168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.940122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.940174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.940184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.940188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.940193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.940203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.950120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.950177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.950186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.950196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.950200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.950210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.122 [2024-11-26 19:31:33.960187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.122 [2024-11-26 19:31:33.960237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.122 [2024-11-26 19:31:33.960247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.122 [2024-11-26 19:31:33.960252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.122 [2024-11-26 19:31:33.960256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.122 [2024-11-26 19:31:33.960266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.122 qpair failed and we were unable to recover it.
00:25:00.384 [2024-11-26 19:31:33.970202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.384 [2024-11-26 19:31:33.970248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.384 [2024-11-26 19:31:33.970257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.384 [2024-11-26 19:31:33.970262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.384 [2024-11-26 19:31:33.970266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.384 [2024-11-26 19:31:33.970276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.384 qpair failed and we were unable to recover it.
00:25:00.384 [2024-11-26 19:31:33.980207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.384 [2024-11-26 19:31:33.980257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.384 [2024-11-26 19:31:33.980267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.384 [2024-11-26 19:31:33.980271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.384 [2024-11-26 19:31:33.980276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.384 [2024-11-26 19:31:33.980286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.384 qpair failed and we were unable to recover it.
00:25:00.384 [2024-11-26 19:31:33.990266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.384 [2024-11-26 19:31:33.990314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.384 [2024-11-26 19:31:33.990323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.384 [2024-11-26 19:31:33.990328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.384 [2024-11-26 19:31:33.990332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.384 [2024-11-26 19:31:33.990342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.384 qpair failed and we were unable to recover it.
00:25:00.384 [2024-11-26 19:31:34.000175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.384 [2024-11-26 19:31:34.000229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.384 [2024-11-26 19:31:34.000238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.384 [2024-11-26 19:31:34.000243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.384 [2024-11-26 19:31:34.000248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.384 [2024-11-26 19:31:34.000257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-11-26 19:31:34.010354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.384 [2024-11-26 19:31:34.010417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.384 [2024-11-26 19:31:34.010427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.384 [2024-11-26 19:31:34.010432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.384 [2024-11-26 19:31:34.010436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.384 [2024-11-26 19:31:34.010446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-11-26 19:31:34.020344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.384 [2024-11-26 19:31:34.020395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.384 [2024-11-26 19:31:34.020404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.384 [2024-11-26 19:31:34.020409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.384 [2024-11-26 19:31:34.020413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.385 [2024-11-26 19:31:34.020423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-11-26 19:31:34.030251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.030295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.030305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.030309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.030314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.030324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.040428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.040482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.040491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.040496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.040500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.040510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.050330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.050385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.050394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.050399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.050403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.050413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.060452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.060498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.060508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.060512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.060517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.060526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.070580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.070633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.070642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.070647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.070651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.070661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.080589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.080653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.080663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.080671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.080675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.080685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.090590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.090641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.090651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.090656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.090660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.090670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.100642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.100703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.100712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.100717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.100721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.100731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.110454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.110506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.110515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.110520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.110524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.110534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.120668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.120720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.120730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.120735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.120739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.120751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.130662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.130733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.130743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.130748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.130752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.130762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.140535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.140578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.140587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.140592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.140597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.140607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.385 qpair failed and we were unable to recover it.
00:25:00.385 [2024-11-26 19:31:34.150702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.385 [2024-11-26 19:31:34.150752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.385 [2024-11-26 19:31:34.150762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.385 [2024-11-26 19:31:34.150767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.385 [2024-11-26 19:31:34.150771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.385 [2024-11-26 19:31:34.150781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.160731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.160785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.160795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.160800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.160804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.160814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.170784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.170871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.170881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.170885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.170890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.170899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.180777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.180826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.180835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.180840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.180844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.180854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.190804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.190855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.190864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.190869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.190874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.190883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.200830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.200891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.200900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.200905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.200909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.200919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.210743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.210795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.210806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.210812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.210816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.210826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.220901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.220949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.220968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.220974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.220980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.220994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.230908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.230959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.230970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.230976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.230981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.230992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.386 [2024-11-26 19:31:34.240946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.386 [2024-11-26 19:31:34.240996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.386 [2024-11-26 19:31:34.241006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.386 [2024-11-26 19:31:34.241012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.386 [2024-11-26 19:31:34.241016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.386 [2024-11-26 19:31:34.241028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.386 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.250958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.251014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.251024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.251029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.251037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.251048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.260994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.261076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.261085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.261091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.261095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.261109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.271013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.271065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.271075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.271080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.271084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.271094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.281059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.281112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.281122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.281127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.281132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.281142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.291091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.291147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.291158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.291163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.291167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.291178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.301115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:00.647 [2024-11-26 19:31:34.301163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:00.647 [2024-11-26 19:31:34.301173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:00.647 [2024-11-26 19:31:34.301179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:00.647 [2024-11-26 19:31:34.301183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:00.647 [2024-11-26 19:31:34.301193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:00.647 qpair failed and we were unable to recover it.
00:25:00.647 [2024-11-26 19:31:34.311143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.311192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.311201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.311206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.311211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.311221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.321063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.321161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.321170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.321176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.321181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.321191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.331185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.331286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.331296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.331301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.331306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.331315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.341194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.341244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.341257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.341262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.341267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.341277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.351236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.351287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.351297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.351302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.351307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.351317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.361278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.361359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.361368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.361374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.361378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.361388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.371327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.371375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.371384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.371389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.371394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.371404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.381347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.381433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.381443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.381448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.381455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.381465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.391353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.391397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.391407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.391412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.391417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.391427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.401350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.401401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.401410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.401416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.401420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.401430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.411410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.411461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.411470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.411475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.411480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.411489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.421309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.421361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.421371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.421376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.421380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.648 [2024-11-26 19:31:34.421390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.648 qpair failed and we were unable to recover it. 
00:25:00.648 [2024-11-26 19:31:34.431468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.648 [2024-11-26 19:31:34.431523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.648 [2024-11-26 19:31:34.431532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.648 [2024-11-26 19:31:34.431537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.648 [2024-11-26 19:31:34.431542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.431552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.441463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.441515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.441525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.441530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.441534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.441544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.451534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.451591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.451601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.451606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.451611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.451620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.461421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.461466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.461475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.461481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.461485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.461495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.471575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.471624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.471637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.471642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.471646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.471656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.481645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.481695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.481704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.481709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.481714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.481724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.491612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.491661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.491670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.491676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.491680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.491690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.649 [2024-11-26 19:31:34.501525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.649 [2024-11-26 19:31:34.501578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.649 [2024-11-26 19:31:34.501587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.649 [2024-11-26 19:31:34.501593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.649 [2024-11-26 19:31:34.501597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.649 [2024-11-26 19:31:34.501607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.649 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.511553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.511605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.511616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.511624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.511630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.511640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.521712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.521760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.521770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.521775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.521780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.521790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.531747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.531794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.531804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.531809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.531814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.531824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.541741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.541795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.541805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.541810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.541815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.541824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.551784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.551831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.551840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.551846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.551851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.551864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.561819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.561871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.561881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.561886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.561891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.561901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.571907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.571959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.571968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.571974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.571978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.571988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.581870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.581925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.581934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.581940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.581944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.581955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.591898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.591943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.591953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.591958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.591962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.591972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.910 [2024-11-26 19:31:34.601908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.910 [2024-11-26 19:31:34.601961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.910 [2024-11-26 19:31:34.601971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.910 [2024-11-26 19:31:34.601976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.910 [2024-11-26 19:31:34.601981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.910 [2024-11-26 19:31:34.601991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.910 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.611951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.612003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.612012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.612018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.612022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.612032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.621910] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.621959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.621968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.621974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.621978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.621989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.631960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.632003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.632013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.632018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.632023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.632033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.641983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.642024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.642033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.642040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.642045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.642055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.652051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.652098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.652110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.652116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.652120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.652130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.662071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.662112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.662122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.662127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.662132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.662142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.672014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.672066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.672075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.672080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.672085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.672095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.682060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.682109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.682119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.682124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.682129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.682145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.692170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.692245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.692254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.692259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.692264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.692274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.702158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.702200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.702210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.702215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.702220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.702230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.712226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.712284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.712294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.712299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.712303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.712314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.722084] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.722141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.722152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.722158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.722162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.722173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.732261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.732307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.732317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.732322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.732327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.732337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.742307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.742405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.742415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.742421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.742425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.742436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.752318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.752367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.752376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.752382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.752386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.752396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.762186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.762225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.762235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.762240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.762245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.762255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:00.911 [2024-11-26 19:31:34.772317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:00.911 [2024-11-26 19:31:34.772402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:00.911 [2024-11-26 19:31:34.772414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:00.911 [2024-11-26 19:31:34.772420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:00.911 [2024-11-26 19:31:34.772425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:00.911 [2024-11-26 19:31:34.772435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.911 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-11-26 19:31:34.782403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-11-26 19:31:34.782446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-11-26 19:31:34.782456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-11-26 19:31:34.782461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-11-26 19:31:34.782466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.171 [2024-11-26 19:31:34.782475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-11-26 19:31:34.792434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-11-26 19:31:34.792487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-11-26 19:31:34.792497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.171 [2024-11-26 19:31:34.792502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.171 [2024-11-26 19:31:34.792507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.171 [2024-11-26 19:31:34.792517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.171 qpair failed and we were unable to recover it. 
00:25:01.171 [2024-11-26 19:31:34.802427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.171 [2024-11-26 19:31:34.802467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.171 [2024-11-26 19:31:34.802477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.802482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.802487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.802497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.812456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.812496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.812505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.812510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.812518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.812528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.822506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.822554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.822564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.822569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.822574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.822584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.832416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.832469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.832480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.832485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.832490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.832500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.842522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.842564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.842574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.842580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.842584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.842595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.852491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.852546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.852556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.852561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.852566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.852576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.862478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.862526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.862535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.862541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.862545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.862555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.872643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.872685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.872695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.872700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.872704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.872714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.882617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.882670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.882698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.882704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.882709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.882725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.892669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.892714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.892725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.892730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.892735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.892746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.902712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.902767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.902779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.902784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.902789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.902800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.912739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.912824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.912834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.912840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.912845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.172 [2024-11-26 19:31:34.912855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.172 qpair failed and we were unable to recover it. 
00:25:01.172 [2024-11-26 19:31:34.922770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.172 [2024-11-26 19:31:34.922845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.172 [2024-11-26 19:31:34.922854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.172 [2024-11-26 19:31:34.922860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.172 [2024-11-26 19:31:34.922864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.922874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.932763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.932805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.932814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.932820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.932824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.932835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.942815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.942859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.942869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.942874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.942881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.942892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.952866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.952907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.952917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.952922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.952927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.952937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.962714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.962754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.962765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.962771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.962776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.962786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.972879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.972920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.972930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.972936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.972941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.972951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.982979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.983021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.983031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.983036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.983041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.983051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:34.992963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:34.993013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:34.993023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:34.993029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:34.993033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:34.993043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:35.002960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:35.003003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:35.003012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:35.003018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:35.003022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:35.003033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:35.012961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:35.013000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:35.013009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:35.013015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:35.013020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:35.013030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:35.023046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:35.023099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:35.023111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:35.023116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:35.023121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:35.023131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.173 [2024-11-26 19:31:35.033067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.173 [2024-11-26 19:31:35.033117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.173 [2024-11-26 19:31:35.033129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.173 [2024-11-26 19:31:35.033134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.173 [2024-11-26 19:31:35.033139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.173 [2024-11-26 19:31:35.033149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.173 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.043073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.043114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.043124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.043129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.043134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.043144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.052965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.053008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.053018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.053023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.053028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.053038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.063131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.063168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.063177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.063182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.063187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.063197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.073177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.073232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.073242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.073250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.073255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.073265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.083158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.083198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.083208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.083213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.083218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.083228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.093246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.093310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.093320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.093325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.093329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.093339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.103246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.103292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.103302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.103307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.103312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.103322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.113165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.113207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.113216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.113222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.113226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.113240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.123304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.123349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.123358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.123364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.123368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.123378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.133295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.133336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.133345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.133351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.133355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.133365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.143206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.143244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.143254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.143260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.143265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.143275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.153383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.153441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.153451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.153456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.153461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.153471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.163407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.163498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.163508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.163513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.163517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.163528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.173453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.173494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.173503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.173508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.173513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.173523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.183445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.183482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.183491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.183496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.435 [2024-11-26 19:31:35.183501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.435 [2024-11-26 19:31:35.183511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.435 qpair failed and we were unable to recover it. 
00:25:01.435 [2024-11-26 19:31:35.193460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.435 [2024-11-26 19:31:35.193498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.435 [2024-11-26 19:31:35.193507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.435 [2024-11-26 19:31:35.193513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.436 [2024-11-26 19:31:35.193517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.436 [2024-11-26 19:31:35.193527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.436 qpair failed and we were unable to recover it. 
00:25:01.436 [2024-11-26 19:31:35.203519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.436 [2024-11-26 19:31:35.203567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.436 [2024-11-26 19:31:35.203576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.436 [2024-11-26 19:31:35.203584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.436 [2024-11-26 19:31:35.203589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.436 [2024-11-26 19:31:35.203599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.436 qpair failed and we were unable to recover it. 
00:25:01.436 [2024-11-26 19:31:35.213546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.436 [2024-11-26 19:31:35.213592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.436 [2024-11-26 19:31:35.213602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.436 [2024-11-26 19:31:35.213607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.436 [2024-11-26 19:31:35.213612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.436 [2024-11-26 19:31:35.213622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.436 qpair failed and we were unable to recover it. 
00:25:01.436 [2024-11-26 19:31:35.223562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.223600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.223610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.223615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.223620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.223630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.233615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.233693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.233703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.233708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.233713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.233723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.243471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.243544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.243554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.243560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.243564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.243578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.253643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.253686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.253696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.253702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.253706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.253717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.263648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.263694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.263703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.263709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.263713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.263723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.273541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.273582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.273592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.273597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.273602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.273612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.283709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.283750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.283760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.283765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.283770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.283780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.436 [2024-11-26 19:31:35.293749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.436 [2024-11-26 19:31:35.293792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.436 [2024-11-26 19:31:35.293802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.436 [2024-11-26 19:31:35.293807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.436 [2024-11-26 19:31:35.293812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.436 [2024-11-26 19:31:35.293822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.436 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.303761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.303799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.303809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.303814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.697 [2024-11-26 19:31:35.303819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.697 [2024-11-26 19:31:35.303829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.697 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.313766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.313804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.313814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.313819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.697 [2024-11-26 19:31:35.313823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.697 [2024-11-26 19:31:35.313833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.697 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.323813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.323853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.323863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.323869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.697 [2024-11-26 19:31:35.323873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.697 [2024-11-26 19:31:35.323884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.697 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.333875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.333920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.333932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.333937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.697 [2024-11-26 19:31:35.333942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.697 [2024-11-26 19:31:35.333952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.697 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.343726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.343768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.343779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.343784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.697 [2024-11-26 19:31:35.343789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.697 [2024-11-26 19:31:35.343799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.697 qpair failed and we were unable to recover it.
00:25:01.697 [2024-11-26 19:31:35.353753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.697 [2024-11-26 19:31:35.353796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.697 [2024-11-26 19:31:35.353807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.697 [2024-11-26 19:31:35.353812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.353816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.353826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.363965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.364032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.364042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.364048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.364052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.364063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.373957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.374013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.374023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.374028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.374036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.374046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.383964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.384008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.384018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.384024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.384029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.384039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.393866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.393926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.393936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.393941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.393946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.393956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.403934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.403985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.403994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.403999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.404004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.404014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.414079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.414126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.414135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.414141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.414146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.414156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.424059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.424113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.424123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.424128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.424133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.424143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.434119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.434156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.434166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.434171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.434176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.434186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.444107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.444151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.444160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.444165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.444170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.444181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.454047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.454114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.454124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.454129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.454134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.454144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.464193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.464238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.464250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.464255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.464260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.464270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.474232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.474273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.474283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.474289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.474293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.698 [2024-11-26 19:31:35.474304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.698 qpair failed and we were unable to recover it.
00:25:01.698 [2024-11-26 19:31:35.484251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.698 [2024-11-26 19:31:35.484293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.698 [2024-11-26 19:31:35.484303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.698 [2024-11-26 19:31:35.484309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.698 [2024-11-26 19:31:35.484314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.699 [2024-11-26 19:31:35.484323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.699 qpair failed and we were unable to recover it.
00:25:01.699 [2024-11-26 19:31:35.494149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.699 [2024-11-26 19:31:35.494190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.699 [2024-11-26 19:31:35.494199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.699 [2024-11-26 19:31:35.494205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.699 [2024-11-26 19:31:35.494209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.699 [2024-11-26 19:31:35.494219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.699 qpair failed and we were unable to recover it.
00:25:01.699 [2024-11-26 19:31:35.504309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.699 [2024-11-26 19:31:35.504350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.699 [2024-11-26 19:31:35.504360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.699 [2024-11-26 19:31:35.504365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.699 [2024-11-26 19:31:35.504372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.699 [2024-11-26 19:31:35.504382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.699 qpair failed and we were unable to recover it.
00:25:01.699 [2024-11-26 19:31:35.514315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.699 [2024-11-26 19:31:35.514373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.699 [2024-11-26 19:31:35.514382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.699 [2024-11-26 19:31:35.514387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.699 [2024-11-26 19:31:35.514392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.699 [2024-11-26 19:31:35.514402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.699 qpair failed and we were unable to recover it.
00:25:01.699 [2024-11-26 19:31:35.524357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.699 [2024-11-26 19:31:35.524399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.699 [2024-11-26 19:31:35.524409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.699 [2024-11-26 19:31:35.524414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.699 [2024-11-26 19:31:35.524418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:01.699 [2024-11-26 19:31:35.524428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:01.699 qpair failed and we were unable to recover it.
00:25:01.699 [2024-11-26 19:31:35.534254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.699 [2024-11-26 19:31:35.534297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.699 [2024-11-26 19:31:35.534307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.699 [2024-11-26 19:31:35.534312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.699 [2024-11-26 19:31:35.534316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.699 [2024-11-26 19:31:35.534326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.699 qpair failed and we were unable to recover it. 
00:25:01.699 [2024-11-26 19:31:35.544395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.699 [2024-11-26 19:31:35.544444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.699 [2024-11-26 19:31:35.544454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.699 [2024-11-26 19:31:35.544459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.699 [2024-11-26 19:31:35.544464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.699 [2024-11-26 19:31:35.544474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.699 qpair failed and we were unable to recover it. 
00:25:01.699 [2024-11-26 19:31:35.554285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.699 [2024-11-26 19:31:35.554324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.699 [2024-11-26 19:31:35.554333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.699 [2024-11-26 19:31:35.554339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.699 [2024-11-26 19:31:35.554344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.699 [2024-11-26 19:31:35.554354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.699 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-11-26 19:31:35.564463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.960 [2024-11-26 19:31:35.564505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.960 [2024-11-26 19:31:35.564514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.960 [2024-11-26 19:31:35.564519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.960 [2024-11-26 19:31:35.564524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.960 [2024-11-26 19:31:35.564534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-11-26 19:31:35.574493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.960 [2024-11-26 19:31:35.574535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.960 [2024-11-26 19:31:35.574544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.960 [2024-11-26 19:31:35.574550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.960 [2024-11-26 19:31:35.574555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.960 [2024-11-26 19:31:35.574565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-11-26 19:31:35.584513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.960 [2024-11-26 19:31:35.584555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.960 [2024-11-26 19:31:35.584565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.584571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.584575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.584586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.594550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.594586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.594598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.594604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.594609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.594618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.604567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.604613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.604622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.604627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.604632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.604642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.614591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.614637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.614647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.614652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.614657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.614667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.624621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.624702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.624711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.624716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.624721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.624731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.634614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.634650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.634660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.634667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.634672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.634682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.644680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.644731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.644740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.644745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.644750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.644760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.654710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.654750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.654759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.654765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.654769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.654779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.664718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.664755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.664765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.664770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.664775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.664785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.674745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.674820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.674830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.961 [2024-11-26 19:31:35.674835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.961 [2024-11-26 19:31:35.674840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.961 [2024-11-26 19:31:35.674852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-11-26 19:31:35.684773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.961 [2024-11-26 19:31:35.684815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.961 [2024-11-26 19:31:35.684824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.684830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.684834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.684844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.694811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.694853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.694863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.694868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.694873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.694883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.704848] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.704888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.704907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.704914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.704919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.704933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.714847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.714886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.714897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.714902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.714907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.714919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.724891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.724969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.724979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.724984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.724989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.725000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.734939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.734981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.734991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.734996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.735001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.735011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.744947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.745026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.745035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.745041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.745045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.745055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.754964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.755007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.755017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.755022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.755027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.755037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.764974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.765017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.765027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.765035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.765040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.765050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.775024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.775068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.775078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.775083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.775088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.962 [2024-11-26 19:31:35.775098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-11-26 19:31:35.785061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.962 [2024-11-26 19:31:35.785107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.962 [2024-11-26 19:31:35.785117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.962 [2024-11-26 19:31:35.785122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.962 [2024-11-26 19:31:35.785127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.963 [2024-11-26 19:31:35.785138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.963 qpair failed and we were unable to recover it. 
00:25:01.963 [2024-11-26 19:31:35.795064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.963 [2024-11-26 19:31:35.795123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.963 [2024-11-26 19:31:35.795133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.963 [2024-11-26 19:31:35.795138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.963 [2024-11-26 19:31:35.795143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.963 [2024-11-26 19:31:35.795153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.963 qpair failed and we were unable to recover it. 
00:25:01.963 [2024-11-26 19:31:35.805095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.963 [2024-11-26 19:31:35.805139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.963 [2024-11-26 19:31:35.805148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.963 [2024-11-26 19:31:35.805154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.963 [2024-11-26 19:31:35.805158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.963 [2024-11-26 19:31:35.805171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.963 qpair failed and we were unable to recover it. 
00:25:01.963 [2024-11-26 19:31:35.815154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.963 [2024-11-26 19:31:35.815195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.963 [2024-11-26 19:31:35.815205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.963 [2024-11-26 19:31:35.815210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.963 [2024-11-26 19:31:35.815215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:01.963 [2024-11-26 19:31:35.815225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:01.963 qpair failed and we were unable to recover it. 
00:25:02.224 [2024-11-26 19:31:35.825147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.224 [2024-11-26 19:31:35.825184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.224 [2024-11-26 19:31:35.825194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.224 [2024-11-26 19:31:35.825199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.224 [2024-11-26 19:31:35.825205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.224 [2024-11-26 19:31:35.825214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.224 qpair failed and we were unable to recover it.
00:25:02.224 [2024-11-26 19:31:35.835186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.224 [2024-11-26 19:31:35.835231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.224 [2024-11-26 19:31:35.835241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.224 [2024-11-26 19:31:35.835246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.224 [2024-11-26 19:31:35.835251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.224 [2024-11-26 19:31:35.835261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.224 qpair failed and we were unable to recover it.
00:25:02.224 [2024-11-26 19:31:35.845234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.224 [2024-11-26 19:31:35.845277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.224 [2024-11-26 19:31:35.845287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.224 [2024-11-26 19:31:35.845292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.224 [2024-11-26 19:31:35.845297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.224 [2024-11-26 19:31:35.845307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.224 qpair failed and we were unable to recover it.
00:25:02.224 [2024-11-26 19:31:35.855121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.224 [2024-11-26 19:31:35.855166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.855176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.855182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.855186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.855197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.865292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.865338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.865348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.865353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.865358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.865368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.875313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.875352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.875362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.875367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.875372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.875382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.885351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.885393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.885403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.885408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.885413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.885422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.895363] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.895402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.895414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.895420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.895424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.895434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.905371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.905417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.905427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.905433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.905438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.905448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.915399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.915458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.915467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.915472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.915477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.915487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.925433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.925475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.925484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.925489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.925493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.925504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.935430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.935473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.935483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.935488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.935495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.935505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.945482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.945560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.945569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.945574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.945579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.945589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.955506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.955544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.225 [2024-11-26 19:31:35.955553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.225 [2024-11-26 19:31:35.955558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.225 [2024-11-26 19:31:35.955563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.225 [2024-11-26 19:31:35.955573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.225 qpair failed and we were unable to recover it.
00:25:02.225 [2024-11-26 19:31:35.965547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.225 [2024-11-26 19:31:35.965600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:35.965609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:35.965614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:35.965618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:35.965628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:35.975558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:35.975602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:35.975612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:35.975617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:35.975621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:35.975631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:35.985577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:35.985617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:35.985627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:35.985632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:35.985637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:35.985647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:35.995620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:35.995657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:35.995667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:35.995673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:35.995677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:35.995687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.005514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.005556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.005565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.005571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.005576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.005586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.015726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.015799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.015809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.015814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.015819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.015829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.025708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.025811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.025823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.025829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.025834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.025844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.035721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.035759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.035769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.035774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.035779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.035788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.045779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.045820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.045829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.045835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.045839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.045849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.055806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.055851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.055861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.055866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.055870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.055880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.065800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.065846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.065855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.065861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.065868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.065878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.075901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.075945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.075955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.075960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.075965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.075975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.226 [2024-11-26 19:31:36.085899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.226 [2024-11-26 19:31:36.085942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.226 [2024-11-26 19:31:36.085952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.226 [2024-11-26 19:31:36.085957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.226 [2024-11-26 19:31:36.085962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.226 [2024-11-26 19:31:36.085972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.226 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.095786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.095831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.095841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.095846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.095851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.095861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.105924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.105963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.105973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.105978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.105983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.105995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.115901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.115943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.115953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.115958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.115963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.115973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.125962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.126002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.126014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.126019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.126024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.126035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.136000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.136039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.136049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.136055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.136059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.136070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.146013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.146063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.146072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.146078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.146082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.146092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.156032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.156071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.156081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.156086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.156091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.156104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.166029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.488 [2024-11-26 19:31:36.166071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.488 [2024-11-26 19:31:36.166080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.488 [2024-11-26 19:31:36.166085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.488 [2024-11-26 19:31:36.166090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:02.488 [2024-11-26 19:31:36.166103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:02.488 qpair failed and we were unable to recover it.
00:25:02.488 [2024-11-26 19:31:36.175964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.488 [2024-11-26 19:31:36.176007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.488 [2024-11-26 19:31:36.176017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.488 [2024-11-26 19:31:36.176022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.488 [2024-11-26 19:31:36.176026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.488 [2024-11-26 19:31:36.176036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.488 qpair failed and we were unable to recover it. 
00:25:02.488 [2024-11-26 19:31:36.186127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.488 [2024-11-26 19:31:36.186164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.186174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.186179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.186184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.186194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.196002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.196071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.196081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.196089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.196094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.196106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.206182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.206263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.206272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.206277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.206282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.206292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.216193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.216234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.216244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.216250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.216254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.216265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.226247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.226286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.226296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.226301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.226306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.226316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.236294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.236330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.236340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.236345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.236349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.236362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.246293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.246366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.246375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.246381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.246386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.246395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.256323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.256368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.256377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.256382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.256387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.256397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.266339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.266374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.266384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.266389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.266394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.266403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.276335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.276372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.276381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.276387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.276391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.276401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.286385] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.286428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.286437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.286442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.286447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.286457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.296457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.296503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.296512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.296517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.296522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.296532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.306306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.306344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.489 [2024-11-26 19:31:36.306353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.489 [2024-11-26 19:31:36.306358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.489 [2024-11-26 19:31:36.306363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.489 [2024-11-26 19:31:36.306373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.489 qpair failed and we were unable to recover it. 
00:25:02.489 [2024-11-26 19:31:36.316432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.489 [2024-11-26 19:31:36.316469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.490 [2024-11-26 19:31:36.316478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.490 [2024-11-26 19:31:36.316484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.490 [2024-11-26 19:31:36.316488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.490 [2024-11-26 19:31:36.316498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.490 qpair failed and we were unable to recover it. 
00:25:02.490 [2024-11-26 19:31:36.326452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.490 [2024-11-26 19:31:36.326535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.490 [2024-11-26 19:31:36.326544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.490 [2024-11-26 19:31:36.326555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.490 [2024-11-26 19:31:36.326561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.490 [2024-11-26 19:31:36.326571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.490 qpair failed and we were unable to recover it. 
00:25:02.490 [2024-11-26 19:31:36.336527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.490 [2024-11-26 19:31:36.336571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.490 [2024-11-26 19:31:36.336581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.490 [2024-11-26 19:31:36.336586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.490 [2024-11-26 19:31:36.336590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.490 [2024-11-26 19:31:36.336600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.490 qpair failed and we were unable to recover it. 
00:25:02.490 [2024-11-26 19:31:36.346554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.490 [2024-11-26 19:31:36.346600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.490 [2024-11-26 19:31:36.346609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.490 [2024-11-26 19:31:36.346615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.490 [2024-11-26 19:31:36.346619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.490 [2024-11-26 19:31:36.346629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.490 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.356547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.356587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.356596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.356601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.356606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.356616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.366598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.366637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.366647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.366652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.366657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.366670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.376650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.376692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.376701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.376707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.376711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.376721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.386657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.386699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.386708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.386714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.386719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.386729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.396646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.396688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.396697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.396703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.396708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.396718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.406729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.406793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.406802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.406808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.406812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.406822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.416745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.416827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.416837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.416842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.416847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.416858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.426755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.426805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.426815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.426820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.426825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.426835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.436746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.436785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.436795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.436800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.436805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.436815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753 [2024-11-26 19:31:36.446790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.753 [2024-11-26 19:31:36.446864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.753 [2024-11-26 19:31:36.446874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.753 [2024-11-26 19:31:36.446879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.753 [2024-11-26 19:31:36.446884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:02.753 [2024-11-26 19:31:36.446894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:02.753 qpair failed and we were unable to recover it. 
00:25:02.753-00:25:03.018 [... identical CONNECT failure sequence (ctrlr.c:764 Unknown controller ID 0x1 -> nvme_fabric.c:599/610 -> nvme_tcp.c:2348/2125 -> nvme_qpair.c:812, "qpair failed and we were unable to recover it") repeated every ~10 ms from 19:31:36.456 through 19:31:36.787; 34 duplicate iterations omitted ...]
00:25:03.018 [2024-11-26 19:31:36.797798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.018 [2024-11-26 19:31:36.797863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.018 [2024-11-26 19:31:36.797873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.018 [2024-11-26 19:31:36.797878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.018 [2024-11-26 19:31:36.797883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.018 [2024-11-26 19:31:36.797895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-11-26 19:31:36.807780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.018 [2024-11-26 19:31:36.807824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.018 [2024-11-26 19:31:36.807842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.018 [2024-11-26 19:31:36.807848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.018 [2024-11-26 19:31:36.807853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.018 [2024-11-26 19:31:36.807866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.018 qpair failed and we were unable to recover it. 
00:25:03.018 [2024-11-26 19:31:36.817681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.018 [2024-11-26 19:31:36.817724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.018 [2024-11-26 19:31:36.817735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.018 [2024-11-26 19:31:36.817740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.018 [2024-11-26 19:31:36.817745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.817756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.827828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.827865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.827875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.827880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.827885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.827895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.837863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.837906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.837924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.837930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.837935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.837949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.847903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.847978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.847997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.848003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.848009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.848024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.857960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.858037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.858047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.858053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.858058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.858069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.867947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.867989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.867999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.868004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.868009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.868019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.019 [2024-11-26 19:31:36.877940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.019 [2024-11-26 19:31:36.877978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.019 [2024-11-26 19:31:36.877987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.019 [2024-11-26 19:31:36.877992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.019 [2024-11-26 19:31:36.877997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.019 [2024-11-26 19:31:36.878007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.019 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.887890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.887932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.887944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.887952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.887957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.887968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.897975] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.898015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.898025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.898031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.898036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.898046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.908022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.908060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.908070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.908076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.908081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.908091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.917947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.917986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.917996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.918002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.918007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.918017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.928103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.928145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.928155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.928161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.928165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.928185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.938132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.938182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.938198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.938203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.938208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.938222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.948123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.948162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.948172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.948178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.948183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.948193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.958177] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.958221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.958231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.958236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.958241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.958251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.968205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.281 [2024-11-26 19:31:36.968248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.281 [2024-11-26 19:31:36.968258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.281 [2024-11-26 19:31:36.968263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.281 [2024-11-26 19:31:36.968267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.281 [2024-11-26 19:31:36.968277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.281 qpair failed and we were unable to recover it. 
00:25:03.281 [2024-11-26 19:31:36.978238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:36.978291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:36.978300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:36.978306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:36.978310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:36.978321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:36.988257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:36.988303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:36.988313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:36.988318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:36.988323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:36.988333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:36.998285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:36.998324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:36.998333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:36.998339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:36.998344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:36.998354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.008326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.008368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.008377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.008383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.008387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.008397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.018353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.018418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.018431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.018436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.018441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.018451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.028389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.028432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.028442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.028448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.028452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.028462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.038260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.038297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.038308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.038314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.038318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.038329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.048429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.048515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.048526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.048531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.048536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.048546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.058457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.282 [2024-11-26 19:31:37.058500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.282 [2024-11-26 19:31:37.058510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.282 [2024-11-26 19:31:37.058515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.282 [2024-11-26 19:31:37.058526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.282 [2024-11-26 19:31:37.058536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.282 qpair failed and we were unable to recover it. 
00:25:03.282 [2024-11-26 19:31:37.068474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.282 [2024-11-26 19:31:37.068511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.282 [2024-11-26 19:31:37.068521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.282 [2024-11-26 19:31:37.068526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.282 [2024-11-26 19:31:37.068531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.282 [2024-11-26 19:31:37.068541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.282 qpair failed and we were unable to recover it.
00:25:03.282 [2024-11-26 19:31:37.078449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.282 [2024-11-26 19:31:37.078520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.282 [2024-11-26 19:31:37.078530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.282 [2024-11-26 19:31:37.078535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.282 [2024-11-26 19:31:37.078540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.282 [2024-11-26 19:31:37.078550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.282 qpair failed and we were unable to recover it.
00:25:03.282 [2024-11-26 19:31:37.088508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.282 [2024-11-26 19:31:37.088548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.282 [2024-11-26 19:31:37.088558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.282 [2024-11-26 19:31:37.088563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.282 [2024-11-26 19:31:37.088568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.282 [2024-11-26 19:31:37.088578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.282 qpair failed and we were unable to recover it.
00:25:03.282 [2024-11-26 19:31:37.098549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.282 [2024-11-26 19:31:37.098591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.282 [2024-11-26 19:31:37.098601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.282 [2024-11-26 19:31:37.098606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.282 [2024-11-26 19:31:37.098611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.283 [2024-11-26 19:31:37.098621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.283 qpair failed and we were unable to recover it.
00:25:03.283 [2024-11-26 19:31:37.108553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.283 [2024-11-26 19:31:37.108618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.283 [2024-11-26 19:31:37.108628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.283 [2024-11-26 19:31:37.108634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.283 [2024-11-26 19:31:37.108638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.283 [2024-11-26 19:31:37.108648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.283 qpair failed and we were unable to recover it.
00:25:03.283 [2024-11-26 19:31:37.118620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.283 [2024-11-26 19:31:37.118673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.283 [2024-11-26 19:31:37.118683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.283 [2024-11-26 19:31:37.118688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.283 [2024-11-26 19:31:37.118692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.283 [2024-11-26 19:31:37.118702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.283 qpair failed and we were unable to recover it.
00:25:03.283 [2024-11-26 19:31:37.128645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.283 [2024-11-26 19:31:37.128685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.283 [2024-11-26 19:31:37.128694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.283 [2024-11-26 19:31:37.128700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.283 [2024-11-26 19:31:37.128704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.283 [2024-11-26 19:31:37.128715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.283 qpair failed and we were unable to recover it.
00:25:03.283 [2024-11-26 19:31:37.138664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.283 [2024-11-26 19:31:37.138707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.283 [2024-11-26 19:31:37.138717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.283 [2024-11-26 19:31:37.138722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.283 [2024-11-26 19:31:37.138727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.283 [2024-11-26 19:31:37.138737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.283 qpair failed and we were unable to recover it.
00:25:03.544 [2024-11-26 19:31:37.148667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.544 [2024-11-26 19:31:37.148711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.544 [2024-11-26 19:31:37.148724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.544 [2024-11-26 19:31:37.148729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.544 [2024-11-26 19:31:37.148734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.544 [2024-11-26 19:31:37.148744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.544 qpair failed and we were unable to recover it.
00:25:03.544 [2024-11-26 19:31:37.158708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.544 [2024-11-26 19:31:37.158752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.544 [2024-11-26 19:31:37.158761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.544 [2024-11-26 19:31:37.158767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.544 [2024-11-26 19:31:37.158771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.544 [2024-11-26 19:31:37.158781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.544 qpair failed and we were unable to recover it.
00:25:03.544 [2024-11-26 19:31:37.168751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.544 [2024-11-26 19:31:37.168798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.544 [2024-11-26 19:31:37.168816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.168822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.168827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.168842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.178779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.178846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.178865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.178871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.178876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.178890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.188798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.188843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.188861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.188867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.188876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.188890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.198877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.198947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.198958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.198964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.198968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.198979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.208718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.208759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.208768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.208774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.208779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.208789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.218895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.219000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.219010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.219016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.219022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.219032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.228927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.228971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.228981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.228986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.228991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.229001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.238929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.238971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.238981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.238987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.238991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.239001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.248965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.249007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.249016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.249022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.249026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.249036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.259003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.259046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.259056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.259062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.259066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.259076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.268988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.269067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.269077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.269082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.269087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.269098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.279033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.279074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.279084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.279089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.279094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.279107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.289046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.289084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.289094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.289102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.545 [2024-11-26 19:31:37.289107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.545 [2024-11-26 19:31:37.289118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.545 qpair failed and we were unable to recover it.
00:25:03.545 [2024-11-26 19:31:37.299113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.545 [2024-11-26 19:31:37.299157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.545 [2024-11-26 19:31:37.299167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.545 [2024-11-26 19:31:37.299173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.299177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.299188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.309073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.309156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.309166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.309171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.309176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.309186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.319146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.319215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.319224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.319233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.319237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.319248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.329153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.329198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.329208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.329213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.329217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.329228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.339192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.339238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.339247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.339252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.339257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.339267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.349256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.349318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.349328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.349333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.349338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.349348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.359278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.359323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.359333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.359338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.359343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.359356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.369302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.369344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.369353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.369359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.369363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.369373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.379326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.379372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.379382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.379387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.379392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.379401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.389339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.389385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.389394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.389400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.389404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.389414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.546 [2024-11-26 19:31:37.399275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.546 [2024-11-26 19:31:37.399310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.546 [2024-11-26 19:31:37.399320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.546 [2024-11-26 19:31:37.399325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.546 [2024-11-26 19:31:37.399330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.546 [2024-11-26 19:31:37.399339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.546 qpair failed and we were unable to recover it.
00:25:03.807 [2024-11-26 19:31:37.409391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.807 [2024-11-26 19:31:37.409438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.807 [2024-11-26 19:31:37.409447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.807 [2024-11-26 19:31:37.409453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.807 [2024-11-26 19:31:37.409457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:03.807 [2024-11-26 19:31:37.409467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:03.807 qpair failed and we were unable to recover it.
00:25:03.807 [2024-11-26 19:31:37.419427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.807 [2024-11-26 19:31:37.419473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.807 [2024-11-26 19:31:37.419483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.807 [2024-11-26 19:31:37.419488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.807 [2024-11-26 19:31:37.419493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.807 [2024-11-26 19:31:37.419503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.807 qpair failed and we were unable to recover it. 
00:25:03.807 [2024-11-26 19:31:37.429396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.807 [2024-11-26 19:31:37.429434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.807 [2024-11-26 19:31:37.429444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.807 [2024-11-26 19:31:37.429449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.807 [2024-11-26 19:31:37.429454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.807 [2024-11-26 19:31:37.429464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.439471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.439546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.439556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.439561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.439566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.439576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.449500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.449544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.449556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.449562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.449567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.449577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.459505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.459550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.459560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.459566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.459571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.459582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.469572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.469608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.469618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.469623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.469628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.469638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.479543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.479627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.479637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.479642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.479647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.479657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.489607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.489652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.489662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.489667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.489672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.489685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.499644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.499720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.499729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.499735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.499739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.499749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.509663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.509701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.509711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.509717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.509721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.509732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.519685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.519733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.519743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.519748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.519752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.519762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.529721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.529758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.529768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.529773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.529777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.529787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.539777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.539860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.539870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.539875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.539880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.539890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.549773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.549819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.549829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.549834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.549838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.808 [2024-11-26 19:31:37.549849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.808 qpair failed and we were unable to recover it. 
00:25:03.808 [2024-11-26 19:31:37.559794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.808 [2024-11-26 19:31:37.559830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.808 [2024-11-26 19:31:37.559840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.808 [2024-11-26 19:31:37.559845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.808 [2024-11-26 19:31:37.559850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.559860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.569832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.569890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.569899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.569904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.569909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.569919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.579842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.579889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.579901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.579906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.579911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.579921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.589858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.589901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.589910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.589916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.589921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.589930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.599878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.599935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.599945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.599950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.599954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.599964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.609914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.609958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.609968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.609973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.609977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.609987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.619944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.619988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.619998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.620003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.620010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.620021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.629969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.630010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.630019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.630025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.630030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.630039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.639846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.639882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.639892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.639897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.639901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.639911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.649881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.649919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.649928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.649933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.649938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.649948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.659916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.660003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.660013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.660019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.660023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.660034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:03.809 [2024-11-26 19:31:37.670065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.809 [2024-11-26 19:31:37.670111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.809 [2024-11-26 19:31:37.670122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.809 [2024-11-26 19:31:37.670127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.809 [2024-11-26 19:31:37.670131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:03.809 [2024-11-26 19:31:37.670142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:03.809 qpair failed and we were unable to recover it. 
00:25:04.071 [2024-11-26 19:31:37.679960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.071 [2024-11-26 19:31:37.680015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.071 [2024-11-26 19:31:37.680026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.071 [2024-11-26 19:31:37.680031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.071 [2024-11-26 19:31:37.680036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.071 [2024-11-26 19:31:37.680046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.071 qpair failed and we were unable to recover it. 
00:25:04.071 [2024-11-26 19:31:37.690139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.071 [2024-11-26 19:31:37.690180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.071 [2024-11-26 19:31:37.690190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.071 [2024-11-26 19:31:37.690196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.071 [2024-11-26 19:31:37.690200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.071 [2024-11-26 19:31:37.690211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.071 qpair failed and we were unable to recover it. 
[The identical CONNECT failure sequence repeats every ~10 ms from 19:31:37.700 through 19:31:38.031 (console timestamps 00:25:04.071-00:25:04.336): each retry logs "Unknown controller ID 0x1", "Connect command failed, rc -5" against traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, "Connect command completed with error: sct 1, sc 130", "Failed to poll NVMe-oF Fabric CONNECT command", "Failed to connect tqpair=0x7f6020000b90", and "CQ transport error -6 (No such device or address) on qpair id 4", ending with "qpair failed and we were unable to recover it."]
00:25:04.336 [2024-11-26 19:31:38.041109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.336 [2024-11-26 19:31:38.041167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.336 [2024-11-26 19:31:38.041177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.336 [2024-11-26 19:31:38.041182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.336 [2024-11-26 19:31:38.041187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.336 [2024-11-26 19:31:38.041197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.336 qpair failed and we were unable to recover it. 
00:25:04.336 [2024-11-26 19:31:38.051036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.336 [2024-11-26 19:31:38.051098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.051111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.051116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.051120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.051133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.060994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.061040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.061050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.061055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.061060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.061069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.071123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.071163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.071172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.071178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.071182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.071193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.081162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.081198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.081208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.081213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.081218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.081227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.091186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.091228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.091237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.091242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.091247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.091257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.101253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.101299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.101309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.101314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.101319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.101328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.111249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.111287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.111296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.111302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.111306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.111316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.121264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.121300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.121309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.121315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.121319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.121329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.131314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.131355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.131364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.337 [2024-11-26 19:31:38.131369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.337 [2024-11-26 19:31:38.131374] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.337 [2024-11-26 19:31:38.131384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.337 qpair failed and we were unable to recover it. 
00:25:04.337 [2024-11-26 19:31:38.141195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.337 [2024-11-26 19:31:38.141271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.337 [2024-11-26 19:31:38.141283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.141289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.141293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.141303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.338 [2024-11-26 19:31:38.151321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.338 [2024-11-26 19:31:38.151356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.338 [2024-11-26 19:31:38.151366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.151371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.151376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.151385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.338 [2024-11-26 19:31:38.161377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.338 [2024-11-26 19:31:38.161414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.338 [2024-11-26 19:31:38.161423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.161429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.161434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.161444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.338 [2024-11-26 19:31:38.171438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.338 [2024-11-26 19:31:38.171477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.338 [2024-11-26 19:31:38.171486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.171491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.171496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.171506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.338 [2024-11-26 19:31:38.181458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.338 [2024-11-26 19:31:38.181502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.338 [2024-11-26 19:31:38.181511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.181517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.181524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.181534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.338 [2024-11-26 19:31:38.191464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.338 [2024-11-26 19:31:38.191499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.338 [2024-11-26 19:31:38.191509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.338 [2024-11-26 19:31:38.191514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.338 [2024-11-26 19:31:38.191518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.338 [2024-11-26 19:31:38.191528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.338 qpair failed and we were unable to recover it. 
00:25:04.601 [2024-11-26 19:31:38.201487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.601 [2024-11-26 19:31:38.201540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.601 [2024-11-26 19:31:38.201550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.601 [2024-11-26 19:31:38.201555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.601 [2024-11-26 19:31:38.201560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.601 [2024-11-26 19:31:38.201570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.601 qpair failed and we were unable to recover it. 
00:25:04.601 [2024-11-26 19:31:38.211511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.601 [2024-11-26 19:31:38.211581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.601 [2024-11-26 19:31:38.211590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.601 [2024-11-26 19:31:38.211595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.601 [2024-11-26 19:31:38.211600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.601 [2024-11-26 19:31:38.211610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.601 qpair failed and we were unable to recover it. 
00:25:04.601 [2024-11-26 19:31:38.221560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.221609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.221618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.221624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.221628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.221639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.231569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.231612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.231621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.231627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.231631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.231641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.241579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.241641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.241650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.241656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.241660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.241670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.251592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.251669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.251679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.251684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.251689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.251698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.261672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.261712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.261723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.261729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.261734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.261744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.271666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.271725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.271737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.271743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.271747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.271757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.281719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.281757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.281766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.281771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.281776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.281786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.291738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.291778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.291787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.291793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.291797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.291807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.301769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.602 [2024-11-26 19:31:38.301808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.602 [2024-11-26 19:31:38.301818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.602 [2024-11-26 19:31:38.301823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.602 [2024-11-26 19:31:38.301827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.602 [2024-11-26 19:31:38.301837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.602 qpair failed and we were unable to recover it. 
00:25:04.602 [2024-11-26 19:31:38.311775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.602 [2024-11-26 19:31:38.311822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.602 [2024-11-26 19:31:38.311831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.602 [2024-11-26 19:31:38.311839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.602 [2024-11-26 19:31:38.311844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.602 [2024-11-26 19:31:38.311854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.602 qpair failed and we were unable to recover it.
00:25:04.602 [2024-11-26 19:31:38.321691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.602 [2024-11-26 19:31:38.321755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.602 [2024-11-26 19:31:38.321764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.602 [2024-11-26 19:31:38.321769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.602 [2024-11-26 19:31:38.321774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.602 [2024-11-26 19:31:38.321784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.602 qpair failed and we were unable to recover it.
00:25:04.602 [2024-11-26 19:31:38.331850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.602 [2024-11-26 19:31:38.331891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.602 [2024-11-26 19:31:38.331900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.602 [2024-11-26 19:31:38.331905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.602 [2024-11-26 19:31:38.331910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.602 [2024-11-26 19:31:38.331920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.602 qpair failed and we were unable to recover it.
00:25:04.602 [2024-11-26 19:31:38.341866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.602 [2024-11-26 19:31:38.341914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.602 [2024-11-26 19:31:38.341932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.602 [2024-11-26 19:31:38.341938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.602 [2024-11-26 19:31:38.341943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.341957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.351916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.351961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.351979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.351985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.351990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.352004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.361921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.362008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.362019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.362025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.362030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.362041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.371931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.371974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.371984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.371989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.371994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.372004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.381979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.382021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.382031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.382037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.382041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.382051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.391999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.392037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.392047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.392052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.392057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.392067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.402088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.402145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.402155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.402160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.402165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.402175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.411940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.412001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.412011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.412016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.412021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.412032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.422078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.422127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.422137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.422142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.422147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.422158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.432118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.432156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.432166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.432171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.432175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.432185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.442133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.442176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.442185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.442193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.442198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.442208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.452166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.452209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.452218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.452224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.452228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.452238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.603 [2024-11-26 19:31:38.462104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.603 [2024-11-26 19:31:38.462158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.603 [2024-11-26 19:31:38.462167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.603 [2024-11-26 19:31:38.462173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.603 [2024-11-26 19:31:38.462177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.603 [2024-11-26 19:31:38.462188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.603 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.472196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.472285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.472295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.472300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.472306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.472316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.482247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.482284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.482294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.482299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.482304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.482317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.492269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.492310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.492319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.492324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.492329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.492339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.502299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.502341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.502350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.502355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.502360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.502370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.512336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.512379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.512389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.512395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.512400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.512410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.522359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.522407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.522416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.522421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.522426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.522436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.532254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.532293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.532302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.532308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.532312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.532322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.542422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.542462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.542472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.865 [2024-11-26 19:31:38.542477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.865 [2024-11-26 19:31:38.542482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.865 [2024-11-26 19:31:38.542492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.865 qpair failed and we were unable to recover it.
00:25:04.865 [2024-11-26 19:31:38.552453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.865 [2024-11-26 19:31:38.552497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.865 [2024-11-26 19:31:38.552507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.552512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.552517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.552527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.562427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.562465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.562474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.562480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.562484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.562494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.572512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.572552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.572567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.572573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.572577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.572587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.582533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.582574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.582584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.582589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.582593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.582603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.592561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.592604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.592613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.592618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.592623] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.592633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.602602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.602644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.602654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.602659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.602664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.602674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.612533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.612573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.612583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.612589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.612596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.612607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.622510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.622585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.622595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.622601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.622606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.622616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.632665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.632709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.632719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.632724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.632729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.632739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.642692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.642730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.642740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.642745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.642750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.642760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.652734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.866 [2024-11-26 19:31:38.652780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.866 [2024-11-26 19:31:38.652791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.866 [2024-11-26 19:31:38.652796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.866 [2024-11-26 19:31:38.652801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:25:04.866 [2024-11-26 19:31:38.652811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:25:04.866 qpair failed and we were unable to recover it.
00:25:04.866 [2024-11-26 19:31:38.662738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.866 [2024-11-26 19:31:38.662779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.866 [2024-11-26 19:31:38.662790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.866 [2024-11-26 19:31:38.662796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.866 [2024-11-26 19:31:38.662800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.866 [2024-11-26 19:31:38.662810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-11-26 19:31:38.672783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.866 [2024-11-26 19:31:38.672822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.866 [2024-11-26 19:31:38.672832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.866 [2024-11-26 19:31:38.672838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.866 [2024-11-26 19:31:38.672842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.866 [2024-11-26 19:31:38.672853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.866 qpair failed and we were unable to recover it. 
00:25:04.866 [2024-11-26 19:31:38.682807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.866 [2024-11-26 19:31:38.682855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.866 [2024-11-26 19:31:38.682865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.866 [2024-11-26 19:31:38.682870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.867 [2024-11-26 19:31:38.682875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.867 [2024-11-26 19:31:38.682885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-11-26 19:31:38.692851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.867 [2024-11-26 19:31:38.692906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.867 [2024-11-26 19:31:38.692916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.867 [2024-11-26 19:31:38.692921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.867 [2024-11-26 19:31:38.692926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.867 [2024-11-26 19:31:38.692936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-11-26 19:31:38.702850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.867 [2024-11-26 19:31:38.702890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.867 [2024-11-26 19:31:38.702902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.867 [2024-11-26 19:31:38.702908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.867 [2024-11-26 19:31:38.702912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.867 [2024-11-26 19:31:38.702923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-11-26 19:31:38.712908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.867 [2024-11-26 19:31:38.712963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.867 [2024-11-26 19:31:38.712972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.867 [2024-11-26 19:31:38.712978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.867 [2024-11-26 19:31:38.712982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.867 [2024-11-26 19:31:38.712992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:04.867 [2024-11-26 19:31:38.722913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.867 [2024-11-26 19:31:38.722952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.867 [2024-11-26 19:31:38.722962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.867 [2024-11-26 19:31:38.722967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.867 [2024-11-26 19:31:38.722972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:04.867 [2024-11-26 19:31:38.722982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:04.867 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.732945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.732991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.733000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.733006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.733011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.733021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.742989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.743030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.743040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.743045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.743052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.743062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.752997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.753037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.753047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.753053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.753058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.753068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.763033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.763071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.763081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.763086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.763091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.763105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.773034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.773079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.773088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.773094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.773098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.773112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.783090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.783189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.783199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.783204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.783209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.783219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.793076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.793126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.793137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.793142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.793147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.793157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.803138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.803178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.803187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.130 [2024-11-26 19:31:38.803192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.130 [2024-11-26 19:31:38.803197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.130 [2024-11-26 19:31:38.803207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.130 qpair failed and we were unable to recover it. 
00:25:05.130 [2024-11-26 19:31:38.813025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.130 [2024-11-26 19:31:38.813066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.130 [2024-11-26 19:31:38.813077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.813082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.813087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.813097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.823175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.823220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.823230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.823235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.823240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.823250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.833192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.833237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.833249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.833255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.833259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.833270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.843241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.843282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.843291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.843297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.843301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.843311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.853264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.853309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.853319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.853324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.853329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.853339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.863320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.863364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.863373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.863378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.863383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.863393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.873329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.873370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.873379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.873387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.873392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.873402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.883323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.883370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.883379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.883385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.883389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.883399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.893361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.893421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.893431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.893436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.893441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.893451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.903399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.903442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.903452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.903457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.903461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.131 [2024-11-26 19:31:38.903471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.131 qpair failed and we were unable to recover it. 
00:25:05.131 [2024-11-26 19:31:38.913426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.131 [2024-11-26 19:31:38.913463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.131 [2024-11-26 19:31:38.913472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.131 [2024-11-26 19:31:38.913478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.131 [2024-11-26 19:31:38.913482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.913492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.923436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.923477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.923487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.923492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.923497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.923507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.933476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.933517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.933527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.933532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.933537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.933547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.943521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.943601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.943610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.943616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.943620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.943630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.953531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.953574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.953583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.953588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.953593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.953602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.963578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.963662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.963672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.963680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.963687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.963698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.973455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.973497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.973507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.973512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.973517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.973527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.132 [2024-11-26 19:31:38.983640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.132 [2024-11-26 19:31:38.983685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.132 [2024-11-26 19:31:38.983694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.132 [2024-11-26 19:31:38.983699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.132 [2024-11-26 19:31:38.983704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.132 [2024-11-26 19:31:38.983714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.132 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:38.993513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:38.993550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:38.993562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:38.993568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:38.993573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:38.993584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.003676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.003715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.003725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.003734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.003738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.003749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.013708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.013750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.013760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.013765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.013770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.013780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.023742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.023828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.023838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.023843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.023848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.023858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.033757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.033797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.033806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.033812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.033817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.033827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.043786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.043820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.043830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.043835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.043840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.043852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.053798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.053843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.053853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.053859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.053863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.053873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.394 [2024-11-26 19:31:39.063841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.394 [2024-11-26 19:31:39.063890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.394 [2024-11-26 19:31:39.063899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.394 [2024-11-26 19:31:39.063904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.394 [2024-11-26 19:31:39.063909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.394 [2024-11-26 19:31:39.063919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.394 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.073861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.073899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.073909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.073914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.073919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.073929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.083864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.083921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.083931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.083936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.083941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.083951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.093931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.093974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.093984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.093989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.093994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.094003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.103952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.104009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.104018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.104023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.104028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.104038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.113953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.113996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.114006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.114011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.114016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.114026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.123988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.124047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.124056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.124061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.124066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.124076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.134030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.134072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.134084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.134089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.134094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90 00:25:05.395 [2024-11-26 19:31:39.134108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.144076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.144126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.144145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.144152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.144157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6024000b90 00:25:05.395 [2024-11-26 19:31:39.144172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.395 qpair failed and we were unable to recover it. 
00:25:05.395 [2024-11-26 19:31:39.154074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.395 [2024-11-26 19:31:39.154121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.395 [2024-11-26 19:31:39.154132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.395 [2024-11-26 19:31:39.154137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.395 [2024-11-26 19:31:39.154142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6024000b90 00:25:05.395 [2024-11-26 19:31:39.154154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:05.395 qpair failed and we were unable to recover it. 00:25:05.395 [2024-11-26 19:31:39.154287] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:25:05.395 A controller has encountered a failure and is being reset. 00:25:05.395 [2024-11-26 19:31:39.154409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1657030 (9): Bad file descriptor 00:25:05.395 Controller properly reset. 
00:25:05.395 Initializing NVMe Controllers 00:25:05.395 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:05.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:05.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:05.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:05.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:05.395 Initialization complete. Launching workers. 00:25:05.395 Starting thread on core 1 00:25:05.396 Starting thread on core 2 00:25:05.396 Starting thread on core 3 00:25:05.396 Starting thread on core 0 00:25:05.396 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:05.396 00:25:05.396 real 0m11.292s 00:25:05.396 user 0m21.612s 00:25:05.396 sys 0m3.566s 00:25:05.396 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:05.396 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:05.396 ************************************ 00:25:05.396 END TEST nvmf_target_disconnect_tc2 00:25:05.396 ************************************ 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:05.655 19:31:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:05.655 rmmod nvme_tcp 00:25:05.655 rmmod nvme_fabrics 00:25:05.655 rmmod nvme_keyring 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3904860 ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3904860 ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 
00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3904860' 00:25:05.655 killing process with pid 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3904860 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.655 19:31:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:08.194 00:25:08.194 real 0m19.209s 00:25:08.194 user 0m48.500s 00:25:08.194 
sys 0m7.971s 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:08.194 ************************************ 00:25:08.194 END TEST nvmf_target_disconnect 00:25:08.194 ************************************ 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:08.194 00:25:08.194 real 5m34.130s 00:25:08.194 user 10m22.629s 00:25:08.194 sys 1m40.752s 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:08.194 19:31:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.194 ************************************ 00:25:08.194 END TEST nvmf_host 00:25:08.194 ************************************ 00:25:08.194 19:31:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:08.194 19:31:41 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:08.194 19:31:41 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:08.194 19:31:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:08.194 19:31:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.194 19:31:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.194 ************************************ 00:25:08.194 START TEST nvmf_target_core_interrupt_mode 00:25:08.194 ************************************ 00:25:08.194 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:08.194 * Looking for test storage... 
00:25:08.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:25:08.195 19:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.195 --rc 
genhtml_branch_coverage=1 00:25:08.195 --rc genhtml_function_coverage=1 00:25:08.195 --rc genhtml_legend=1 00:25:08.195 --rc geninfo_all_blocks=1 00:25:08.195 --rc geninfo_unexecuted_blocks=1 00:25:08.195 00:25:08.195 ' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.195 --rc genhtml_branch_coverage=1 00:25:08.195 --rc genhtml_function_coverage=1 00:25:08.195 --rc genhtml_legend=1 00:25:08.195 --rc geninfo_all_blocks=1 00:25:08.195 --rc geninfo_unexecuted_blocks=1 00:25:08.195 00:25:08.195 ' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.195 --rc genhtml_branch_coverage=1 00:25:08.195 --rc genhtml_function_coverage=1 00:25:08.195 --rc genhtml_legend=1 00:25:08.195 --rc geninfo_all_blocks=1 00:25:08.195 --rc geninfo_unexecuted_blocks=1 00:25:08.195 00:25:08.195 ' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:08.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.195 --rc genhtml_branch_coverage=1 00:25:08.195 --rc genhtml_function_coverage=1 00:25:08.195 --rc genhtml_legend=1 00:25:08.195 --rc geninfo_all_blocks=1 00:25:08.195 --rc geninfo_unexecuted_blocks=1 00:25:08.195 00:25:08.195 ' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.195 
19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.195 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.196 19:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:08.196 
19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:08.196 ************************************ 00:25:08.196 START TEST nvmf_abort 00:25:08.196 ************************************ 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:08.196 * Looking for test storage... 
00:25:08.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:25:08.196 19:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.196 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.197 --rc genhtml_branch_coverage=1 00:25:08.197 --rc genhtml_function_coverage=1 00:25:08.197 --rc genhtml_legend=1 00:25:08.197 --rc geninfo_all_blocks=1 00:25:08.197 --rc geninfo_unexecuted_blocks=1 00:25:08.197 00:25:08.197 ' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.197 --rc genhtml_branch_coverage=1 00:25:08.197 --rc genhtml_function_coverage=1 00:25:08.197 --rc genhtml_legend=1 00:25:08.197 --rc geninfo_all_blocks=1 00:25:08.197 --rc geninfo_unexecuted_blocks=1 00:25:08.197 00:25:08.197 ' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.197 --rc genhtml_branch_coverage=1 00:25:08.197 --rc genhtml_function_coverage=1 00:25:08.197 --rc genhtml_legend=1 00:25:08.197 --rc geninfo_all_blocks=1 00:25:08.197 --rc geninfo_unexecuted_blocks=1 00:25:08.197 00:25:08.197 ' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.197 --rc genhtml_branch_coverage=1 00:25:08.197 --rc genhtml_function_coverage=1 00:25:08.197 --rc genhtml_legend=1 00:25:08.197 --rc geninfo_all_blocks=1 00:25:08.197 --rc geninfo_unexecuted_blocks=1 00:25:08.197 00:25:08.197 ' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.197 19:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.197 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.198 19:31:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:25:08.198 19:31:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.532 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.532 19:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:13.533 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:13.533 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.533 
19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:13.533 Found net devices under 0000:31:00.0: cvl_0_0 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:13.533 Found net devices under 0000:31:00.1: cvl_0_1 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.533 19:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.533 19:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.533 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.533 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.507 ms 00:25:13.533 00:25:13.533 --- 10.0.0.2 ping statistics --- 00:25:13.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.533 rtt min/avg/max/mdev = 0.507/0.507/0.507/0.000 ms 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:25:13.533 00:25:13.533 --- 10.0.0.1 ping statistics --- 00:25:13.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.533 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3910624 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3910624 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3910624 ']' 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.533 19:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:13.533 [2024-11-26 19:31:47.292147] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:13.533 [2024-11-26 19:31:47.293113] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:25:13.533 [2024-11-26 19:31:47.293148] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.533 [2024-11-26 19:31:47.378066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.795 [2024-11-26 19:31:47.412965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.795 [2024-11-26 19:31:47.412998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.795 [2024-11-26 19:31:47.413006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.795 [2024-11-26 19:31:47.413012] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:13.795 [2024-11-26 19:31:47.413018] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.795 [2024-11-26 19:31:47.414369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.795 [2024-11-26 19:31:47.414524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.795 [2024-11-26 19:31:47.414526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.795 [2024-11-26 19:31:47.470237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:13.795 [2024-11-26 19:31:47.471122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:13.795 [2024-11-26 19:31:47.471644] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:25:13.795 [2024-11-26 19:31:47.471680] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.363 [2024-11-26 19:31:48.103309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:25:14.363 Malloc0 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.363 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.363 Delay0 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.364 [2024-11-26 19:31:48.175083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.364 19:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:25:14.624 [2024-11-26 19:31:48.275754] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:25:16.531 Initializing NVMe Controllers 00:25:16.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:25:16.531 controller IO queue size 128 less than required 00:25:16.531 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:25:16.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:16.531 Initialization complete. Launching workers. 
00:25:16.531 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28805 00:25:16.531 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28862, failed to submit 66 00:25:16.531 success 28805, unsuccessful 57, failed 0 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.531 rmmod nvme_tcp 00:25:16.531 rmmod nvme_fabrics 00:25:16.531 rmmod nvme_keyring 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.531 19:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3910624 ']' 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3910624 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3910624 ']' 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3910624 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.531 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3910624 00:25:16.790 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3910624' 00:25:16.791 killing process with pid 3910624 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3910624 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3910624 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:16.791 19:31:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.791 19:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:19.330 00:25:19.330 real 0m10.835s 00:25:19.330 user 0m9.889s 00:25:19.330 sys 0m5.020s 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:25:19.330 ************************************ 00:25:19.330 END TEST nvmf_abort 00:25:19.330 ************************************ 00:25:19.330 19:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:19.330 ************************************ 00:25:19.330 START TEST nvmf_ns_hotplug_stress 00:25:19.330 ************************************ 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:25:19.330 * Looking for test storage... 
00:25:19.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.330 19:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:25:19.330 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.331 19:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:19.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.331 --rc genhtml_branch_coverage=1 00:25:19.331 --rc genhtml_function_coverage=1 00:25:19.331 --rc genhtml_legend=1 00:25:19.331 --rc geninfo_all_blocks=1 00:25:19.331 --rc geninfo_unexecuted_blocks=1 00:25:19.331 00:25:19.331 ' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:19.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.331 --rc genhtml_branch_coverage=1 00:25:19.331 --rc genhtml_function_coverage=1 00:25:19.331 --rc genhtml_legend=1 00:25:19.331 --rc geninfo_all_blocks=1 00:25:19.331 --rc geninfo_unexecuted_blocks=1 00:25:19.331 00:25:19.331 ' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:19.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.331 --rc genhtml_branch_coverage=1 00:25:19.331 --rc genhtml_function_coverage=1 00:25:19.331 --rc genhtml_legend=1 00:25:19.331 --rc geninfo_all_blocks=1 00:25:19.331 --rc geninfo_unexecuted_blocks=1 00:25:19.331 00:25:19.331 ' 00:25:19.331 19:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:19.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.331 --rc genhtml_branch_coverage=1 00:25:19.331 --rc genhtml_function_coverage=1 00:25:19.331 --rc genhtml_legend=1 00:25:19.331 --rc geninfo_all_blocks=1 00:25:19.331 --rc geninfo_unexecuted_blocks=1 00:25:19.331 00:25:19.331 ' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.331 19:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.331 
19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.331 19:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:25:24.612 
19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:24.612 19:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:24.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.612 19:31:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:24.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.612 
19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:24.612 Found net devices under 0000:31:00.0: cvl_0_0 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:24.612 Found net devices under 0000:31:00.1: cvl_0_1 00:25:24.612 
19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.612 19:31:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:25:24.612 00:25:24.612 --- 10.0.0.2 ping statistics --- 00:25:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.612 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:24.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:25:24.612 00:25:24.612 --- 10.0.0.1 ping statistics --- 00:25:24.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.612 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:24.612 19:31:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3915648 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3915648 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3915648 ']' 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:24.612 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:24.613 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:24.613 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:24.613 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:25:24.613 [2024-11-26 19:31:58.165285] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:25:24.613 [2024-11-26 19:31:58.166270] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:25:24.613 [2024-11-26 19:31:58.166308] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.613 [2024-11-26 19:31:58.250628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:24.613 [2024-11-26 19:31:58.286522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.613 [2024-11-26 19:31:58.286552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:24.613 [2024-11-26 19:31:58.286560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.613 [2024-11-26 19:31:58.286567] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.613 [2024-11-26 19:31:58.286572] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:24.613 [2024-11-26 19:31:58.287811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.613 [2024-11-26 19:31:58.287999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.613 [2024-11-26 19:31:58.288000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.613 [2024-11-26 19:31:58.344037] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:25:24.613 [2024-11-26 19:31:58.344933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:25:24.613 [2024-11-26 19:31:58.345192] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:25:24.613 [2024-11-26 19:31:58.345238] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:25:25.182 19:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:25.443 [2024-11-26 19:31:59.108758] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.443 19:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:25:25.443 19:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.703 [2024-11-26 19:31:59.461441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.703 19:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:25.963 19:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:25:26.223 Malloc0 00:25:26.223 19:31:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:25:26.223 Delay0 00:25:26.223 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:26.483 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:25:26.743 NULL1 00:25:26.743 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:25:26.743 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3916125 00:25:26.743 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:26.743 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:25:26.743 19:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:28.124 Read completed with error (sct=0, sc=11) 00:25:28.124 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:28.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:25:28.124 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:28.124 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:25:28.124 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:25:28.124 true 00:25:28.383 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:28.383 19:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:29.321 19:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:29.321 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:25:29.321 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:25:29.321 true 00:25:29.321 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:29.321 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:29.580 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:29.839 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:25:29.839 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:25:29.839 true 00:25:29.839 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:29.840 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.099 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:30.099 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:25:30.099 19:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:25:30.360 true 00:25:30.360 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:30.360 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.621 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:30.621 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:25:30.621 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:25:30.880 true 00:25:30.880 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:30.881 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:30.881 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:31.140 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:25:31.140 19:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:25:31.400 true 00:25:31.400 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:31.400 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:31.400 19:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:31.659 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:25:31.659 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:25:31.659 true 00:25:31.920 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:31.920 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:31.920 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:32.179 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:25:32.179 19:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:25:32.179 true 00:25:32.179 19:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:32.179 19:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:25:33.119 19:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.380 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:25:33.380 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:25:33.640 true 00:25:33.640 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:33.640 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:33.640 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:33.898 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:25:33.898 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:25:33.898 true 00:25:33.898 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:33.898 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:25:34.157 19:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.416 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:25:34.416 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:25:34.416 true 00:25:34.416 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:34.416 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:34.676 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:34.676 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:25:34.677 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:25:34.936 true 00:25:34.936 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:34.936 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:35.196 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:35.196 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:25:35.196 19:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:25:35.456 true 00:25:35.456 19:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:35.456 19:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:36.397 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.397 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:36.397 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:25:36.397 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:25:36.658 true 00:25:36.658 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:36.658 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:36.918 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:36.918 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:25:36.918 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:25:37.178 true 00:25:37.178 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:37.178 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:37.178 19:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:37.439 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:25:37.439 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:25:37.439 true 00:25:37.699 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:37.699 19:32:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:37.699 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:37.958 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:25:37.958 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:25:37.958 true 00:25:37.958 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:37.958 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.217 19:32:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:38.476 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:25:38.476 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:25:38.476 true 00:25:38.476 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 
00:25:38.476 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:38.735 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:38.735 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:25:38.735 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:25:38.994 true 00:25:38.994 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:38.994 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:39.254 19:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:39.254 19:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:25:39.254 19:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:25:39.514 true 00:25:39.514 19:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 3916125 00:25:39.514 19:32:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:40.454 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:40.454 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.454 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:25:40.454 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:25:40.713 true 00:25:40.713 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:40.713 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:40.713 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:40.973 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:25:40.973 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:25:41.232 true 00:25:41.232 19:32:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:41.232 19:32:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:41.232 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:41.492 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:25:41.492 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:25:41.492 true 00:25:41.492 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:41.492 19:32:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.432 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:42.433 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:42.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:25:42.693 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:25:42.693 19:32:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:25:42.953 true 00:25:42.953 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:42.953 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:42.953 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.212 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:25:43.212 19:32:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:25:43.212 true 00:25:43.212 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:43.212 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.472 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.733 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
00:25:43.733 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:25:43.733 true 00:25:43.733 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:43.733 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:43.994 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:43.994 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:25:44.254 19:32:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:25:44.254 true 00:25:44.254 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:44.254 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:44.514 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:44.514 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1028 00:25:44.514 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:25:44.773 true 00:25:44.773 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:44.773 19:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:45.709 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:45.709 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:25:45.709 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:25:45.967 true 00:25:45.967 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:45.967 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.225 19:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.225 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:25:46.225 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:25:46.483 true 00:25:46.483 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:46.483 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:46.483 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:46.810 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:25:46.811 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:25:46.811 true 00:25:46.811 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:46.811 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.099 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.359 19:32:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:25:47.359 19:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:25:47.359 true 00:25:47.359 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:47.359 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:47.617 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.617 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:25:47.617 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:25:47.877 true 00:25:47.877 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:47.877 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.137 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:25:48.137 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:25:48.137 19:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:25:48.397 true 00:25:48.397 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:48.397 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:48.397 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:48.657 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:25:48.657 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:25:48.657 true 00:25:48.918 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:48.918 19:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:49.859 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:25:49.859 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:25:49.859 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:25:49.859 true 00:25:50.118 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:50.118 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.118 19:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:50.377 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:25:50.377 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:25:50.377 true 00:25:50.377 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:50.377 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:50.638 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:50.897 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:25:50.898 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:25:50.898 true 00:25:50.898 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:50.898 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:51.158 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.158 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:25:51.158 19:32:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:25:51.417 true 00:25:51.417 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:51.417 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:51.677 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:51.677 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:25:51.677 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:25:51.936 true 00:25:51.936 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:51.936 19:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:52.875 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:52.876 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:25:52.876 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:25:53.136 true 00:25:53.136 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:53.136 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.136 19:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.395 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:25:53.395 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:25:53.655 true 00:25:53.655 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:53.655 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:53.655 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:53.915 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:25:53.915 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:25:53.915 true 00:25:54.175 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:54.175 19:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:54.175 19:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.434 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:25:54.434 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:25:54.434 true 00:25:54.434 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:54.434 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:54.694 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:54.954 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:25:54.954 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:25:54.954 true 00:25:54.954 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:54.954 19:32:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:25:55.892 19:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:56.152 19:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:25:56.152 19:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:25:56.411 true 00:25:56.411 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:56.411 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:56.411 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:56.671 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:25:56.671 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:25:56.671 true 00:25:56.671 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:56.671 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:25:56.931 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:57.192 Initializing NVMe Controllers 00:25:57.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.192 Controller IO queue size 128, less than required. 00:25:57.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.192 Controller IO queue size 128, less than required. 00:25:57.192 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:57.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.192 Initialization complete. Launching workers. 
00:25:57.192 ======================================================== 00:25:57.192 Latency(us) 00:25:57.192 Device Information : IOPS MiB/s Average min max 00:25:57.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 341.68 0.17 134739.15 1567.14 1021526.52 00:25:57.192 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10797.46 5.27 11855.39 1126.69 332807.65 00:25:57.192 ======================================================== 00:25:57.192 Total : 11139.15 5.44 15624.74 1126.69 1021526.52 00:25:57.192 00:25:57.192 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:25:57.192 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:25:57.192 true 00:25:57.192 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3916125 00:25:57.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3916125) - No such process 00:25:57.192 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3916125 00:25:57.192 19:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:57.451 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:57.451 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:25:57.451 
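The trace above repeats a single pattern until the background I/O process (PID 3916125) exits: check the process is still alive with `kill -0`, remove namespace 1 from `nqn.2016-06.io.spdk:cnode1`, re-add the `Delay0` bdev as a namespace, then grow the `NULL1` null bdev by one unit via `bdev_null_resize`. Below is a minimal runnable sketch of that loop. The real test drives a live nvmf target through `spdk/scripts/rpc.py`; here that client is replaced with an echo stub, and the iteration count and starting size are illustrative (only the NQN and bdev names `NULL1`/`Delay0` are taken from the log).

```shell
# Stub standing in for spdk/scripts/rpc.py -- the real test sends these
# as JSON-RPC calls to a running SPDK nvmf target.
rpc() { echo "rpc.py $*"; }

null_size=1024          # illustrative starting size; the log is at 1030+
for i in $(seq 1 5); do # real loop runs while the I/O process is alive
    # Hot-unplug namespace 1, then hot-plug the Delay0 bdev back in
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 >/dev/null
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 >/dev/null
    # Grow the null bdev by one unit each pass (sh@49/sh@50 in the trace)
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size" >/dev/null
done
echo "final null_size=$null_size"
```

In the actual script the loop condition is `kill -0 $perf_pid`, which is why the trace ends with `kill: (3916125) - No such process` once the I/O workload finishes, followed by `wait` and the final namespace cleanup.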
19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:25:57.451 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:25:57.451 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:57.451 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:25:57.711 null0 00:25:57.711 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:57.711 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:57.711 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:25:57.971 null1 00:25:57.971 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:57.971 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:57.971 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:25:57.971 null2 00:25:57.971 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:57.971 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:57.971 19:32:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:25:58.230 null3 00:25:58.230 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:58.230 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:58.230 19:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:25:58.230 null4 00:25:58.230 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:58.230 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:58.230 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:25:58.489 null5 00:25:58.489 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:58.489 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:58.489 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:25:58.749 null6 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:25:58.749 null7 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.749 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3923139 3923140 3923142 3923143 3923145 3923147 3923149 3923151 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:58.750 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.010 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.269 19:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:59.269 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:59.269 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:25:59.536 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:25:59.536 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:25:59.798 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:25:59.798 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:25:59.798 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:00.058 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.058 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.058 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:00.318 19:32:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.318 19:32:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:00.318 19:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.318 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:00.579 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.838 19:32:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.838 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:00.839 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.098 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:01.358 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.358 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.358 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.358 19:32:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 6 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.358 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:01.619 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:01.619 19:32:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:01.620 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:01.879 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:02.139 19:32:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.139 19:32:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.139 rmmod nvme_tcp 00:26:02.398 rmmod nvme_fabrics 00:26:02.398 rmmod nvme_keyring 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3915648 ']' 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3915648 ']' 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:02.398 19:32:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3915648' 00:26:02.398 killing process with pid 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3915648 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.398 
19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.398 19:32:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.932 00:26:04.932 real 0m45.598s 00:26:04.932 user 2m54.077s 00:26:04.932 sys 0m17.101s 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:04.932 ************************************ 00:26:04.932 END TEST nvmf_ns_hotplug_stress 00:26:04.932 ************************************ 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:04.932 ************************************ 00:26:04.932 START TEST nvmf_delete_subsystem 00:26:04.932 ************************************ 00:26:04.932 19:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:26:04.932 * Looking for test storage... 00:26:04.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:26:04.932 19:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:04.932 19:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:04.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.932 --rc genhtml_branch_coverage=1 00:26:04.932 --rc genhtml_function_coverage=1 00:26:04.932 --rc genhtml_legend=1 00:26:04.932 --rc geninfo_all_blocks=1 00:26:04.932 --rc geninfo_unexecuted_blocks=1 00:26:04.932 00:26:04.932 ' 00:26:04.932 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:04.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.932 --rc genhtml_branch_coverage=1 00:26:04.932 --rc genhtml_function_coverage=1 00:26:04.933 --rc genhtml_legend=1 00:26:04.933 --rc geninfo_all_blocks=1 00:26:04.933 --rc geninfo_unexecuted_blocks=1 00:26:04.933 00:26:04.933 ' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:04.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.933 --rc genhtml_branch_coverage=1 00:26:04.933 --rc 
genhtml_function_coverage=1 00:26:04.933 --rc genhtml_legend=1 00:26:04.933 --rc geninfo_all_blocks=1 00:26:04.933 --rc geninfo_unexecuted_blocks=1 00:26:04.933 00:26:04.933 ' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:04.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:04.933 --rc genhtml_branch_coverage=1 00:26:04.933 --rc genhtml_function_coverage=1 00:26:04.933 --rc genhtml_legend=1 00:26:04.933 --rc geninfo_all_blocks=1 00:26:04.933 --rc geninfo_unexecuted_blocks=1 00:26:04.933 00:26:04.933 ' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
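The trace above steps through `cmp_versions` in scripts/common.sh checking `lt 1.15 2` for the installed lcov: each version string is split on `.`, `-` and `:` (the `IFS=.-:` / `read -ra` lines) and compared component-wise, with missing components treated as 0. A minimal sketch of that comparison, reconstructed from the traced statements (the function body here is an illustration, not the verbatim script; non-numeric components are not handled):

```shell
# Sketch of the component-wise version compare traced above.
# lt VER1 VER2 -> exit 0 if VER1 < VER2, else exit 1.
lt() {
    local IFS=.-:              # split on '.', '-' and ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # missing trailing components default to 0 (e.g. "2" vs "1.15")
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                   # equal versions: not strictly less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

This matches the trace's outcome: `lt 1.15 2` succeeds at the first component (1 < 2), so the script takes the pre-2.0 lcov branch and sets the `--rc lcov_branch_coverage=1 ...` options.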
NVMF_TRANSPORT_OPTS= 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.933 19:32:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.933 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:04.934 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:04.934 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:26:04.934 19:32:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- 
# e810=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:10.213 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:10.214 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:10.214 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:10.214 Found net devices under 0000:31:00.0: cvl_0_0 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:10.214 19:32:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:10.214 Found net devices under 0000:31:00.1: cvl_0_1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:10.214 19:32:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:10.214 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:10.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:10.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:26:10.214 00:26:10.214 --- 10.0.0.2 ping statistics --- 00:26:10.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.214 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:10.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:10.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:26:10.215 00:26:10.215 --- 10.0.0.1 ping statistics --- 00:26:10.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:10.215 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
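The `nvmf_tcp_init` sequence above builds the TCP test bed: one port of the NIC pair (`cvl_0_0`) is moved into a fresh network namespace to act as the target, the other (`cvl_0_1`) stays in the default namespace as the initiator, addresses 10.0.0.2/10.0.0.1 are assigned, the NVMe-oF port 4420 is opened in iptables, and connectivity is verified with pings in both directions. A dry-run sketch of that sequence, reconstructed from the traced commands (interface names, namespace name, and addresses are taken from this log; the real helper adds tagging/comments and error handling not shown here):

```shell
# Dry-run reconstruction of the traced nvmf_tcp_init steps: print the
# commands rather than execute them (the real ones need root and the
# cvl_0_* interfaces to exist).
nvmf_tcp_setup_cmds() {
    cat <<'EOF'
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
EOF
}

nvmf_tcp_setup_cmds
```

Because the target side lives in `cvl_0_0_ns_spdk`, the `nvmf_tgt` app launched next is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` prefix visible below).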
-- common/autotest_common.sh@10 -- # set +x 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3928352 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3928352 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3928352 ']' 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.215 19:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.215 [2024-11-26 19:32:43.765751] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:10.215 [2024-11-26 19:32:43.766760] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:26:10.215 [2024-11-26 19:32:43.766799] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.215 [2024-11-26 19:32:43.852965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:10.215 [2024-11-26 19:32:43.897712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:10.215 [2024-11-26 19:32:43.897761] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:10.215 [2024-11-26 19:32:43.897770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:10.215 [2024-11-26 19:32:43.897777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:10.215 [2024-11-26 19:32:43.897782] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:10.215 [2024-11-26 19:32:43.899368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.215 [2024-11-26 19:32:43.899483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.215 [2024-11-26 19:32:43.970794] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:10.215 [2024-11-26 19:32:43.971406] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:10.215 [2024-11-26 19:32:43.971719] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.784 [2024-11-26 19:32:44.576336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.784 [2024-11-26 19:32:44.596812] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.784 NULL1 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:26:10.784 Delay0 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3928655 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:10.784 19:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:11.043 [2024-11-26 19:32:44.664244] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
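For readers reconstructing the setup phase from the trace above, the RPC sequence it exercises (create the TCP transport, the subsystem, its listener, a null bdev wrapped in a delay bdev, and the namespace) can be sketched as a standalone script. This is a sketch, not the test script itself: the `rpc.py` helper's location is an assumption (adjust to your SPDK checkout), and a running `nvmf_tgt` is required, so the sketch prints a notice instead of failing when the tool is absent.

```shell
# Hypothetical sketch of the RPC sequence traced above (delete_subsystem.sh
# steps 13-26). Requires a running nvmf_tgt; the rpc.py location is an
# assumption -- point RPC at your SPDK checkout's scripts/rpc.py.
RPC=${RPC:-rpc.py}

run_rpc_sequence() {
    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$RPC" bdev_null_create NULL1 1000 512    # 1000 MiB null bdev, 512 B blocks
    "$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}

if command -v "$RPC" >/dev/null 2>&1; then
    run_rpc_sequence
else
    echo "rpc.py not found; sketch not executed"
fi
```

The 1,000,000 us latencies configured on `Delay0` are what make the subsequent `nvmf_delete_subsystem` race against in-flight I/O, producing the aborted-command errors seen below.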
00:26:12.948 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.948 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.948 19:32:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, 
sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 starting I/O failed: -6 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 [2024-11-26 19:32:46.711694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fedcc000c40 is same with the state(6) to be set 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.948 Write completed with error (sct=0, sc=8) 00:26:12.948 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 
Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with 
error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with 
error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 starting I/O failed: -6 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 [2024-11-26 19:32:46.717022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64ff00 is same with the state(6) to be set 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read 
completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:12.949 Write completed with error (sct=0, sc=8) 00:26:12.949 Read completed with error (sct=0, sc=8) 00:26:13.884 [2024-11-26 19:32:47.680290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6515f0 is same with the state(6) to be set 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with 
error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 [2024-11-26 19:32:47.714995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fedcc00d020 is same with the state(6) to be set 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 [2024-11-26 19:32:47.715329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fedcc00d7c0 is same with the state(6) to be set 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with 
error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 [2024-11-26 19:32:47.719751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6504a0 is same with the state(6) to be set 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Read completed with error (sct=0, sc=8) 00:26:13.884 Write completed with error (sct=0, sc=8) 00:26:13.884 [2024-11-26 19:32:47.719965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6500e0 is same with the state(6) to be set 00:26:13.884 Initializing NVMe Controllers 00:26:13.884 Attached to NVMe over Fabrics controller 
at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.884 Controller IO queue size 128, less than required. 00:26:13.885 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:13.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:13.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:13.885 Initialization complete. Launching workers. 00:26:13.885 ======================================================== 00:26:13.885 Latency(us) 00:26:13.885 Device Information : IOPS MiB/s Average min max 00:26:13.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 152.58 0.07 955802.22 195.97 2001726.19 00:26:13.885 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.52 0.08 910558.07 223.78 1005564.65 00:26:13.885 ======================================================== 00:26:13.885 Total : 314.10 0.15 932535.85 195.97 2001726.19 00:26:13.885 00:26:13.885 [2024-11-26 19:32:47.720494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6515f0 (9): Bad file descriptor 00:26:13.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:26:13.885 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.885 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:26:13.885 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3928655 00:26:13.885 19:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:26:14.453 
19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3928655 00:26:14.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3928655) - No such process 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3928655 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3928655 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:26:14.453 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3928655 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:14.454 19:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 [2024-11-26 19:32:48.240653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.454 19:32:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3929334 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:14.454 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:14.454 [2024-11-26 19:32:48.291666] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
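The repeated `kill -0 3929334` / `sleep 0.5` lines surrounding this point are a bounded poll waiting for the perf process to exit after its subsystem is deleted. A minimal standalone sketch of that pattern follows, with a short-lived `sleep` standing in for `spdk_nvme_perf` (the variable names mirror the traced script but the example itself is illustrative):

```shell
# Bounded wait-for-exit loop, mirroring the pattern traced from
# delete_subsystem.sh: probe with kill -0 every 0.5 s, give up after ~20 polls.
# A short-lived `sleep` stands in for the spdk_nvme_perf child here.
sleep 1 &
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "timed out waiting for pid $perf_pid"
        break
    fi
    sleep 0.5
done
wait "$perf_pid" 2>/dev/null    # reap the child; ignore its exit status
echo "pid $perf_pid gone after $delay polls"
```

`kill -0` sends no signal; it only checks whether the pid still exists, which is why the trace eventually shows `kill: (3929334) - No such process` once perf has exited.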
00:26:15.022 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:15.022 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:15.022 19:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:15.590 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:15.590 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:15.590 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:16.157 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:16.157 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:16.157 19:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:16.415 19:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:16.416 19:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:16.416 19:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:16.983 19:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:16.983 19:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:16.983 19:32:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:17.550 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:17.551 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:17.551 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:17.551 Initializing NVMe Controllers 00:26:17.551 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:17.551 Controller IO queue size 128, less than required. 00:26:17.551 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:17.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:26:17.551 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:26:17.551 Initialization complete. Launching workers. 
00:26:17.551 ======================================================== 00:26:17.551 Latency(us) 00:26:17.551 Device Information : IOPS MiB/s Average min max 00:26:17.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004328.49 1000317.79 1009784.89 00:26:17.551 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002506.29 1000120.05 1007170.62 00:26:17.551 ======================================================== 00:26:17.551 Total : 256.00 0.12 1003417.39 1000120.05 1009784.89 00:26:17.551 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3929334 00:26:18.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3929334) - No such process 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3929334 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:18.119 rmmod nvme_tcp 00:26:18.119 rmmod nvme_fabrics 00:26:18.119 rmmod nvme_keyring 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3928352 ']' 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3928352 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3928352 ']' 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3928352 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3928352 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:18.119 19:32:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3928352' 00:26:18.119 killing process with pid 3928352 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3928352 00:26:18.119 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3928352 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.378 19:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.378 19:32:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:20.364 00:26:20.364 real 0m15.738s 00:26:20.364 user 0m25.235s 00:26:20.364 sys 0m5.627s 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:20.364 ************************************ 00:26:20.364 END TEST nvmf_delete_subsystem 00:26:20.364 ************************************ 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:20.364 ************************************ 00:26:20.364 START TEST nvmf_host_management 00:26:20.364 ************************************ 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:26:20.364 * Looking for test storage... 
00:26:20.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.364 19:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:20.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.364 --rc genhtml_branch_coverage=1 00:26:20.364 --rc genhtml_function_coverage=1 00:26:20.364 --rc genhtml_legend=1 00:26:20.364 --rc geninfo_all_blocks=1 00:26:20.364 --rc geninfo_unexecuted_blocks=1 00:26:20.364 00:26:20.364 ' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:20.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.364 --rc genhtml_branch_coverage=1 00:26:20.364 --rc genhtml_function_coverage=1 00:26:20.364 --rc genhtml_legend=1 00:26:20.364 --rc geninfo_all_blocks=1 00:26:20.364 --rc geninfo_unexecuted_blocks=1 00:26:20.364 00:26:20.364 ' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:20.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.364 --rc genhtml_branch_coverage=1 00:26:20.364 --rc genhtml_function_coverage=1 00:26:20.364 --rc genhtml_legend=1 00:26:20.364 --rc geninfo_all_blocks=1 00:26:20.364 --rc geninfo_unexecuted_blocks=1 00:26:20.364 00:26:20.364 ' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:20.364 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.364 --rc genhtml_branch_coverage=1 00:26:20.364 --rc genhtml_function_coverage=1 00:26:20.364 --rc genhtml_legend=1 00:26:20.364 --rc geninfo_all_blocks=1 00:26:20.364 --rc geninfo_unexecuted_blocks=1 00:26:20.364 00:26:20.364 ' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.364 19:32:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.364 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.365 
19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.365 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.623 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:20.624 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:20.624 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:26:20.624 19:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.894 
19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.894 19:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:25.894 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.894 19:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:25.894 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.894 19:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:25.894 Found net devices under 0000:31:00.0: cvl_0_0 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.894 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:25.895 Found net devices under 0000:31:00.1: cvl_0_1 00:26:25.895 19:32:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:26:25.895 00:26:25.895 --- 10.0.0.2 ping statistics --- 00:26:25.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.895 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:25.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:26:25.895 00:26:25.895 --- 10.0.0.1 ping statistics --- 00:26:25.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.895 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3934577 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3934577 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3934577 ']' 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:25.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:25.895 19:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:26:25.895 [2024-11-26 19:32:59.629281] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:25.895 [2024-11-26 19:32:59.630428] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:26:25.895 [2024-11-26 19:32:59.630481] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.895 [2024-11-26 19:32:59.723672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.155 [2024-11-26 19:32:59.777176] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.155 [2024-11-26 19:32:59.777227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.155 [2024-11-26 19:32:59.777236] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.155 [2024-11-26 19:32:59.777243] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.155 [2024-11-26 19:32:59.777250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.155 [2024-11-26 19:32:59.779257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.155 [2024-11-26 19:32:59.779422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.155 [2024-11-26 19:32:59.779585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.155 [2024-11-26 19:32:59.779585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.155 [2024-11-26 19:32:59.839330] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:26.155 [2024-11-26 19:32:59.840500] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:26:26.155 [2024-11-26 19:32:59.840501] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:26.155 [2024-11-26 19:32:59.840661] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:26.155 [2024-11-26 19:32:59.840684] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 [2024-11-26 19:33:00.440418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 19:33:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 Malloc0 00:26:26.725 [2024-11-26 19:33:00.520240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3934750 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3934750 /var/tmp/bdevperf.sock 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3934750 ']' 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:26.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:26.725 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:26.726 { 00:26:26.726 "params": { 00:26:26.726 "name": "Nvme$subsystem", 00:26:26.726 "trtype": "$TEST_TRANSPORT", 00:26:26.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:26.726 "adrfam": "ipv4", 00:26:26.726 "trsvcid": "$NVMF_PORT", 00:26:26.726 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:26:26.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:26.726 "hdgst": ${hdgst:-false}, 00:26:26.726 "ddgst": ${ddgst:-false} 00:26:26.726 }, 00:26:26.726 "method": "bdev_nvme_attach_controller" 00:26:26.726 } 00:26:26.726 EOF 00:26:26.726 )") 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:26.726 19:33:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:26.726 "params": { 00:26:26.726 "name": "Nvme0", 00:26:26.726 "trtype": "tcp", 00:26:26.726 "traddr": "10.0.0.2", 00:26:26.726 "adrfam": "ipv4", 00:26:26.726 "trsvcid": "4420", 00:26:26.726 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:26.726 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:26.726 "hdgst": false, 00:26:26.726 "ddgst": false 00:26:26.726 }, 00:26:26.726 "method": "bdev_nvme_attach_controller" 00:26:26.726 }' 00:26:26.984 [2024-11-26 19:33:00.595230] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:26:26.984 [2024-11-26 19:33:00.595284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934750 ] 00:26:26.984 [2024-11-26 19:33:00.673160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.984 [2024-11-26 19:33:00.710358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.243 Running I/O for 10 seconds... 
00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:26:27.812 19:33:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=779 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 779 -ge 100 ']' 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.812 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:27.812 
[2024-11-26 19:33:01.448125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925800 is same with the state(6) to be set 00:26:27.812 [2024-11-26 19:33:01.448181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x925800 is same with the state(6) to be set 00:26:27.812 [2024-11-26 19:33:01.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:111616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:111744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:43 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.812 [2024-11-26 19:33:01.448877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.812 [2024-11-26 19:33:01.448887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.448896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.448907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.448916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.448926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.448935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.448945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:27.813 [2024-11-26 19:33:01.448953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.448963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.448971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.448981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.448989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449054] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 
19:33:01.449501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449611] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.813 [2024-11-26 19:33:01.449658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.813 [2024-11-26 19:33:01.449669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.814 [2024-11-26 19:33:01.449777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.449787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145d270 is same with the state(6) to be set 00:26:27.814 [2024-11-26 19:33:01.451063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:27.814 task offset: 110720 on job bdev=Nvme0n1 fails 00:26:27.814 00:26:27.814 Latency(us) 00:26:27.814 [2024-11-26T18:33:01.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.814 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:27.814 Job: Nvme0n1 ended in about 0.60 seconds with error 00:26:27.814 Verification LBA range: start 0x0 length 0x400 00:26:27.814 Nvme0n1 : 0.60 1409.80 88.11 107.41 0.00 41214.75 2075.31 34515.63 00:26:27.814 [2024-11-26T18:33:01.679Z] 
=================================================================================================================== 00:26:27.814 [2024-11-26T18:33:01.679Z] Total : 1409.80 88.11 107.41 0.00 41214.75 2075.31 34515.63 00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.814 [2024-11-26 19:33:01.453319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:27.814 [2024-11-26 19:33:01.453366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144cb10 (9): Bad file descriptor 00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:27.814 [2024-11-26 19:33:01.454728] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:26:27.814 [2024-11-26 19:33:01.454831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:27.814 [2024-11-26 19:33:01.454858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:27.814 [2024-11-26 19:33:01.454875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:26:27.814 [2024-11-26 19:33:01.454883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:26:27.814 [2024-11-26 
19:33:01.454892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:27.814 [2024-11-26 19:33:01.454900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x144cb10 00:26:27.814 [2024-11-26 19:33:01.454922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x144cb10 (9): Bad file descriptor 00:26:27.814 [2024-11-26 19:33:01.454936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:27.814 [2024-11-26 19:33:01.454946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:27.814 [2024-11-26 19:33:01.454957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:27.814 [2024-11-26 19:33:01.454969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.814 19:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:26:28.751 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3934750 00:26:28.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3934750) - No such process 00:26:28.751 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:26:28.751 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:28.752 { 00:26:28.752 "params": { 00:26:28.752 "name": "Nvme$subsystem", 00:26:28.752 "trtype": "$TEST_TRANSPORT", 00:26:28.752 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:28.752 "adrfam": "ipv4", 00:26:28.752 "trsvcid": "$NVMF_PORT", 00:26:28.752 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:28.752 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:28.752 "hdgst": ${hdgst:-false}, 00:26:28.752 "ddgst": ${ddgst:-false} 00:26:28.752 }, 00:26:28.752 "method": "bdev_nvme_attach_controller" 00:26:28.752 } 00:26:28.752 EOF 00:26:28.752 )") 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:26:28.752 19:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:28.752 "params": { 00:26:28.752 "name": "Nvme0", 00:26:28.752 "trtype": "tcp", 00:26:28.752 "traddr": "10.0.0.2", 00:26:28.752 "adrfam": "ipv4", 00:26:28.752 "trsvcid": "4420", 00:26:28.752 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:28.752 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:28.752 "hdgst": false, 00:26:28.752 "ddgst": false 00:26:28.752 }, 00:26:28.752 "method": "bdev_nvme_attach_controller" 00:26:28.752 }' 00:26:28.752 [2024-11-26 19:33:02.500383] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:26:28.752 [2024-11-26 19:33:02.500438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935255 ] 00:26:28.752 [2024-11-26 19:33:02.577572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.752 [2024-11-26 19:33:02.613235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.010 Running I/O for 1 seconds... 
00:26:29.948 1472.00 IOPS, 92.00 MiB/s 00:26:29.948 Latency(us) 00:26:29.948 [2024-11-26T18:33:03.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.948 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:26:29.948 Verification LBA range: start 0x0 length 0x400 00:26:29.948 Nvme0n1 : 1.01 1520.61 95.04 0.00 0.00 41373.90 8355.84 35826.35 00:26:29.948 [2024-11-26T18:33:03.813Z] =================================================================================================================== 00:26:29.948 [2024-11-26T18:33:03.813Z] Total : 1520.61 95.04 0.00 0.00 41373.90 8355.84 35826.35 00:26:30.206 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:26:30.206 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:26:30.206 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:26:30.206 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:26:30.207 19:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.207 rmmod nvme_tcp 00:26:30.207 rmmod nvme_fabrics 00:26:30.207 rmmod nvme_keyring 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3934577 ']' 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3934577 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3934577 ']' 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3934577 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3934577 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:30.207 19:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3934577' 00:26:30.207 killing process with pid 3934577 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3934577 00:26:30.207 19:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3934577 00:26:30.466 [2024-11-26 19:33:04.091268] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.466 19:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:32.369 00:26:32.369 real 0m12.077s 00:26:32.369 user 0m17.458s 00:26:32.369 sys 0m5.596s 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:26:32.369 ************************************ 00:26:32.369 END TEST nvmf_host_management 00:26:32.369 ************************************ 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:32.369 ************************************ 00:26:32.369 START TEST nvmf_lvol 00:26:32.369 ************************************ 00:26:32.369 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:26:32.630 * Looking for test storage... 
00:26:32.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.630 --rc genhtml_branch_coverage=1 00:26:32.630 --rc genhtml_function_coverage=1 00:26:32.630 --rc genhtml_legend=1 00:26:32.630 --rc geninfo_all_blocks=1 00:26:32.630 --rc geninfo_unexecuted_blocks=1 00:26:32.630 00:26:32.630 ' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.630 --rc genhtml_branch_coverage=1 00:26:32.630 --rc genhtml_function_coverage=1 00:26:32.630 --rc genhtml_legend=1 00:26:32.630 --rc geninfo_all_blocks=1 00:26:32.630 --rc geninfo_unexecuted_blocks=1 00:26:32.630 00:26:32.630 ' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.630 --rc genhtml_branch_coverage=1 00:26:32.630 --rc genhtml_function_coverage=1 00:26:32.630 --rc genhtml_legend=1 00:26:32.630 --rc geninfo_all_blocks=1 00:26:32.630 --rc geninfo_unexecuted_blocks=1 00:26:32.630 00:26:32.630 ' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:32.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:32.630 --rc genhtml_branch_coverage=1 00:26:32.630 --rc genhtml_function_coverage=1 00:26:32.630 --rc genhtml_legend=1 00:26:32.630 --rc geninfo_all_blocks=1 00:26:32.630 --rc geninfo_unexecuted_blocks=1 00:26:32.630 00:26:32.630 ' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.630 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:32.631 
19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:26:32.631 19:33:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:37.902 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:26:37.903 19:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:37.903 19:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:37.903 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:37.903 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:37.903 19:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:37.903 Found net devices under 0000:31:00.0: cvl_0_0 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.903 19:33:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:37.903 Found net devices under 0000:31:00.1: cvl_0_1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:37.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:37.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:26:37.903 00:26:37.903 --- 10.0.0.2 ping statistics --- 00:26:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.903 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:26:37.903 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:37.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:37.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:26:37.903 00:26:37.903 --- 10.0.0.1 ping statistics --- 00:26:37.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:37.904 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:37.904 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3940317 
00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3940317 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3940317 ']' 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:38.162 19:33:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:26:38.162 [2024-11-26 19:33:11.803237] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:38.162 [2024-11-26 19:33:11.804222] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:26:38.162 [2024-11-26 19:33:11.804258] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:38.162 [2024-11-26 19:33:11.888507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:38.162 [2024-11-26 19:33:11.924908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:38.162 [2024-11-26 19:33:11.924941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:38.162 [2024-11-26 19:33:11.924949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:38.162 [2024-11-26 19:33:11.924955] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:38.163 [2024-11-26 19:33:11.924961] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:38.163 [2024-11-26 19:33:11.926366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.163 [2024-11-26 19:33:11.926521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.163 [2024-11-26 19:33:11.926522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.163 [2024-11-26 19:33:11.982906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:38.163 [2024-11-26 19:33:11.983841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:26:38.163 [2024-11-26 19:33:11.984425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:26:38.163 [2024-11-26 19:33:11.984437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:38.728 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.728 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:26:38.728 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:38.728 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:38.728 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:38.985 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:38.985 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:38.985 [2024-11-26 19:33:12.743290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:38.985 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:39.243 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:26:39.243 19:33:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:39.243 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:26:39.502 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:26:39.502 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:26:39.761 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7169e7f9-5376-49ef-89a6-b55a551f47f4 00:26:39.761 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7169e7f9-5376-49ef-89a6-b55a551f47f4 lvol 20 00:26:39.761 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f82ca7a7-b363-4e55-bc56-a39495f00639 00:26:39.761 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:40.019 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f82ca7a7-b363-4e55-bc56-a39495f00639 00:26:40.277 19:33:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:40.277 [2024-11-26 19:33:14.075093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.277 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:40.534 
19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3940984 00:26:40.534 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:26:40.534 19:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:26:41.470 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f82ca7a7-b363-4e55-bc56-a39495f00639 MY_SNAPSHOT 00:26:41.731 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c0b28a93-e227-4320-a2a5-bc697ddcfe4c 00:26:41.731 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f82ca7a7-b363-4e55-bc56-a39495f00639 30 00:26:41.990 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c0b28a93-e227-4320-a2a5-bc697ddcfe4c MY_CLONE 00:26:41.990 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=e791e835-4afc-4c04-917a-bf7409cd8f85 00:26:41.990 19:33:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate e791e835-4afc-4c04-917a-bf7409cd8f85 00:26:42.558 19:33:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3940984 00:26:50.779 Initializing NVMe Controllers 00:26:50.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:50.779 
Controller IO queue size 128, less than required. 00:26:50.779 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:50.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:26:50.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:26:50.779 Initialization complete. Launching workers. 00:26:50.779 ======================================================== 00:26:50.779 Latency(us) 00:26:50.779 Device Information : IOPS MiB/s Average min max 00:26:50.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16170.50 63.17 7918.10 1833.10 58518.74 00:26:50.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16487.10 64.40 7764.57 2119.00 60106.72 00:26:50.779 ======================================================== 00:26:50.779 Total : 32657.60 127.57 7840.59 1833.10 60106.72 00:26:50.779 00:26:50.779 19:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:51.039 19:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f82ca7a7-b363-4e55-bc56-a39495f00639 00:26:51.298 19:33:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7169e7f9-5376-49ef-89a6-b55a551f47f4 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:51.298 rmmod nvme_tcp 00:26:51.298 rmmod nvme_fabrics 00:26:51.298 rmmod nvme_keyring 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3940317 ']' 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3940317 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3940317 ']' 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3940317 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:51.298 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3940317 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3940317' 00:26:51.558 killing process with pid 3940317 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3940317 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3940317 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.558 19:33:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.558 19:33:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:54.105 00:26:54.105 real 0m21.141s 00:26:54.105 user 0m54.241s 00:26:54.105 sys 0m8.891s 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:26:54.105 ************************************ 00:26:54.105 END TEST nvmf_lvol 00:26:54.105 ************************************ 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:54.105 ************************************ 00:26:54.105 START TEST nvmf_lvs_grow 00:26:54.105 ************************************ 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:26:54.105 * Looking for test storage... 
00:26:54.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:54.105 19:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.105 19:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.105 --rc genhtml_branch_coverage=1 00:26:54.105 --rc genhtml_function_coverage=1 00:26:54.105 --rc genhtml_legend=1 00:26:54.105 --rc geninfo_all_blocks=1 00:26:54.105 --rc geninfo_unexecuted_blocks=1 00:26:54.105 00:26:54.105 ' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.105 --rc genhtml_branch_coverage=1 00:26:54.105 --rc genhtml_function_coverage=1 00:26:54.105 --rc genhtml_legend=1 00:26:54.105 --rc geninfo_all_blocks=1 00:26:54.105 --rc geninfo_unexecuted_blocks=1 00:26:54.105 00:26:54.105 ' 00:26:54.105 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:54.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.105 --rc genhtml_branch_coverage=1 00:26:54.106 --rc genhtml_function_coverage=1 00:26:54.106 --rc genhtml_legend=1 00:26:54.106 --rc geninfo_all_blocks=1 00:26:54.106 --rc geninfo_unexecuted_blocks=1 00:26:54.106 00:26:54.106 ' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:54.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.106 --rc genhtml_branch_coverage=1 00:26:54.106 --rc genhtml_function_coverage=1 00:26:54.106 --rc genhtml_legend=1 00:26:54.106 --rc geninfo_all_blocks=1 00:26:54.106 --rc 
geninfo_unexecuted_blocks=1 00:26:54.106 00:26:54.106 ' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:26:54.106 19:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.106 19:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:54.106 19:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:26:54.106 19:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:59.394 
19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:59.394 19:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:59.394 19:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:59.394 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:59.394 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:59.394 Found net devices under 0000:31:00.0: cvl_0_0 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.394 19:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:59.394 Found net devices under 0000:31:00.1: cvl_0_1 00:26:59.394 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:59.395 
19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:59.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:26:59.395 00:26:59.395 --- 10.0.0.2 ping statistics --- 00:26:59.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.395 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:59.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:59.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:26:59.395 00:26:59.395 --- 10.0.0.1 ping statistics --- 00:26:59.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.395 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 19:33:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3947649 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3947649 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3947649 ']' 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 [2024-11-26 19:33:32.783271] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:59.395 [2024-11-26 19:33:32.784260] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:26:59.395 [2024-11-26 19:33:32.784297] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.395 [2024-11-26 19:33:32.856134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.395 [2024-11-26 19:33:32.885131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.395 [2024-11-26 19:33:32.885157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.395 [2024-11-26 19:33:32.885164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.395 [2024-11-26 19:33:32.885169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.395 [2024-11-26 19:33:32.885173] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.395 [2024-11-26 19:33:32.885626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.395 [2024-11-26 19:33:32.937068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:59.395 [2024-11-26 19:33:32.937271] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.395 19:33:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:59.395 [2024-11-26 19:33:33.122367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:26:59.395 ************************************ 00:26:59.395 START TEST lvs_grow_clean 00:26:59.395 ************************************ 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:26:59.395 19:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:26:59.395 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:26:59.396 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:26:59.654 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:26:59.654 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:26:59.654 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:26:59.654 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:26:59.654 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:26:59.914 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:26:59.914 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:26:59.914 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad lvol 150 00:27:00.175 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=102afe72-e543-4bc5-a935-1eb60fefbd1c 00:27:00.175 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:00.175 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:00.175 [2024-11-26 19:33:33.962017] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:00.175 [2024-11-26 19:33:33.962197] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:00.175 true 00:27:00.175 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:00.175 19:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:00.434 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:00.434 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:00.434 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 102afe72-e543-4bc5-a935-1eb60fefbd1c 00:27:00.694 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:00.955 [2024-11-26 19:33:34.590590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3948031 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3948031 /var/tmp/bdevperf.sock 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3948031 ']' 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:00.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.955 19:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.955 [2024-11-26 19:33:34.795025] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:00.955 [2024-11-26 19:33:34.795079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3948031 ] 00:27:01.215 [2024-11-26 19:33:34.875206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.215 [2024-11-26 19:33:34.919847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.786 19:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.786 19:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:27:01.786 19:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:02.356 Nvme0n1 00:27:02.356 19:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:02.356 [ 00:27:02.356 { 00:27:02.356 "name": "Nvme0n1", 00:27:02.356 "aliases": [ 00:27:02.356 "102afe72-e543-4bc5-a935-1eb60fefbd1c" 00:27:02.356 ], 00:27:02.356 "product_name": "NVMe disk", 00:27:02.356 
"block_size": 4096, 00:27:02.356 "num_blocks": 38912, 00:27:02.356 "uuid": "102afe72-e543-4bc5-a935-1eb60fefbd1c", 00:27:02.356 "numa_id": 0, 00:27:02.356 "assigned_rate_limits": { 00:27:02.356 "rw_ios_per_sec": 0, 00:27:02.356 "rw_mbytes_per_sec": 0, 00:27:02.356 "r_mbytes_per_sec": 0, 00:27:02.356 "w_mbytes_per_sec": 0 00:27:02.356 }, 00:27:02.356 "claimed": false, 00:27:02.356 "zoned": false, 00:27:02.356 "supported_io_types": { 00:27:02.356 "read": true, 00:27:02.356 "write": true, 00:27:02.356 "unmap": true, 00:27:02.356 "flush": true, 00:27:02.356 "reset": true, 00:27:02.356 "nvme_admin": true, 00:27:02.356 "nvme_io": true, 00:27:02.356 "nvme_io_md": false, 00:27:02.356 "write_zeroes": true, 00:27:02.356 "zcopy": false, 00:27:02.356 "get_zone_info": false, 00:27:02.356 "zone_management": false, 00:27:02.356 "zone_append": false, 00:27:02.356 "compare": true, 00:27:02.356 "compare_and_write": true, 00:27:02.356 "abort": true, 00:27:02.356 "seek_hole": false, 00:27:02.356 "seek_data": false, 00:27:02.356 "copy": true, 00:27:02.356 "nvme_iov_md": false 00:27:02.356 }, 00:27:02.356 "memory_domains": [ 00:27:02.356 { 00:27:02.356 "dma_device_id": "system", 00:27:02.356 "dma_device_type": 1 00:27:02.356 } 00:27:02.356 ], 00:27:02.356 "driver_specific": { 00:27:02.356 "nvme": [ 00:27:02.356 { 00:27:02.356 "trid": { 00:27:02.356 "trtype": "TCP", 00:27:02.356 "adrfam": "IPv4", 00:27:02.356 "traddr": "10.0.0.2", 00:27:02.356 "trsvcid": "4420", 00:27:02.356 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:02.356 }, 00:27:02.356 "ctrlr_data": { 00:27:02.356 "cntlid": 1, 00:27:02.356 "vendor_id": "0x8086", 00:27:02.356 "model_number": "SPDK bdev Controller", 00:27:02.356 "serial_number": "SPDK0", 00:27:02.356 "firmware_revision": "25.01", 00:27:02.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.356 "oacs": { 00:27:02.356 "security": 0, 00:27:02.356 "format": 0, 00:27:02.356 "firmware": 0, 00:27:02.356 "ns_manage": 0 00:27:02.356 }, 00:27:02.356 "multi_ctrlr": true, 
00:27:02.356 "ana_reporting": false 00:27:02.356 }, 00:27:02.356 "vs": { 00:27:02.356 "nvme_version": "1.3" 00:27:02.356 }, 00:27:02.356 "ns_data": { 00:27:02.356 "id": 1, 00:27:02.356 "can_share": true 00:27:02.356 } 00:27:02.356 } 00:27:02.356 ], 00:27:02.356 "mp_policy": "active_passive" 00:27:02.356 } 00:27:02.356 } 00:27:02.356 ] 00:27:02.356 19:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3948373 00:27:02.356 19:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:02.356 19:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:02.356 Running I/O for 10 seconds... 00:27:03.737 Latency(us) 00:27:03.737 [2024-11-26T18:33:37.602Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.737 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:03.737 Nvme0n1 : 1.00 17721.00 69.22 0.00 0.00 0.00 0.00 0.00 00:27:03.737 [2024-11-26T18:33:37.602Z] =================================================================================================================== 00:27:03.737 [2024-11-26T18:33:37.602Z] Total : 17721.00 69.22 0.00 0.00 0.00 0.00 0.00 00:27:03.737 00:27:04.307 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:04.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:04.567 Nvme0n1 : 2.00 17814.00 69.59 0.00 0.00 0.00 0.00 0.00 00:27:04.567 [2024-11-26T18:33:38.432Z] 
=================================================================================================================== 00:27:04.567 [2024-11-26T18:33:38.432Z] Total : 17814.00 69.59 0.00 0.00 0.00 0.00 0.00 00:27:04.567 00:27:04.567 true 00:27:04.567 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:04.567 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:04.826 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:04.826 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:04.826 19:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3948373 00:27:05.396 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:05.396 Nvme0n1 : 3.00 17887.33 69.87 0.00 0.00 0.00 0.00 0.00 00:27:05.396 [2024-11-26T18:33:39.261Z] =================================================================================================================== 00:27:05.396 [2024-11-26T18:33:39.261Z] Total : 17887.33 69.87 0.00 0.00 0.00 0.00 0.00 00:27:05.396 00:27:06.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:06.776 Nvme0n1 : 4.00 18734.00 73.18 0.00 0.00 0.00 0.00 0.00 00:27:06.776 [2024-11-26T18:33:40.641Z] =================================================================================================================== 00:27:06.776 [2024-11-26T18:33:40.641Z] Total : 18734.00 73.18 0.00 0.00 0.00 0.00 0.00 00:27:06.776 00:27:07.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:27:07.717 Nvme0n1 : 5.00 20067.80 78.39 0.00 0.00 0.00 0.00 0.00 00:27:07.717 [2024-11-26T18:33:41.582Z] =================================================================================================================== 00:27:07.717 [2024-11-26T18:33:41.582Z] Total : 20067.80 78.39 0.00 0.00 0.00 0.00 0.00 00:27:07.717 00:27:08.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:08.656 Nvme0n1 : 6.00 20967.33 81.90 0.00 0.00 0.00 0.00 0.00 00:27:08.656 [2024-11-26T18:33:42.521Z] =================================================================================================================== 00:27:08.656 [2024-11-26T18:33:42.521Z] Total : 20967.33 81.90 0.00 0.00 0.00 0.00 0.00 00:27:08.656 00:27:09.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:09.593 Nvme0n1 : 7.00 21618.71 84.45 0.00 0.00 0.00 0.00 0.00 00:27:09.593 [2024-11-26T18:33:43.458Z] =================================================================================================================== 00:27:09.593 [2024-11-26T18:33:43.458Z] Total : 21618.71 84.45 0.00 0.00 0.00 0.00 0.00 00:27:09.593 00:27:10.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:10.534 Nvme0n1 : 8.00 22107.62 86.36 0.00 0.00 0.00 0.00 0.00 00:27:10.534 [2024-11-26T18:33:44.399Z] =================================================================================================================== 00:27:10.534 [2024-11-26T18:33:44.399Z] Total : 22107.62 86.36 0.00 0.00 0.00 0.00 0.00 00:27:10.534 00:27:11.472 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:11.472 Nvme0n1 : 9.00 22480.78 87.82 0.00 0.00 0.00 0.00 0.00 00:27:11.472 [2024-11-26T18:33:45.337Z] =================================================================================================================== 00:27:11.472 [2024-11-26T18:33:45.337Z] Total : 22480.78 87.82 0.00 0.00 0.00 0.00 0.00 00:27:11.472 
00:27:12.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.411 Nvme0n1 : 10.00 22785.40 89.01 0.00 0.00 0.00 0.00 0.00 00:27:12.411 [2024-11-26T18:33:46.276Z] =================================================================================================================== 00:27:12.411 [2024-11-26T18:33:46.276Z] Total : 22785.40 89.01 0.00 0.00 0.00 0.00 0.00 00:27:12.411 00:27:12.411 00:27:12.411 Latency(us) 00:27:12.411 [2024-11-26T18:33:46.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:12.411 Nvme0n1 : 10.00 22791.91 89.03 0.00 0.00 5613.12 2348.37 13107.20 00:27:12.411 [2024-11-26T18:33:46.276Z] =================================================================================================================== 00:27:12.411 [2024-11-26T18:33:46.277Z] Total : 22791.91 89.03 0.00 0.00 5613.12 2348.37 13107.20 00:27:12.412 { 00:27:12.412 "results": [ 00:27:12.412 { 00:27:12.412 "job": "Nvme0n1", 00:27:12.412 "core_mask": "0x2", 00:27:12.412 "workload": "randwrite", 00:27:12.412 "status": "finished", 00:27:12.412 "queue_depth": 128, 00:27:12.412 "io_size": 4096, 00:27:12.412 "runtime": 10.002759, 00:27:12.412 "iops": 22791.91171155878, 00:27:12.412 "mibps": 89.03090512327648, 00:27:12.412 "io_failed": 0, 00:27:12.412 "io_timeout": 0, 00:27:12.412 "avg_latency_us": 5613.115110725115, 00:27:12.412 "min_latency_us": 2348.3733333333334, 00:27:12.412 "max_latency_us": 13107.2 00:27:12.412 } 00:27:12.412 ], 00:27:12.412 "core_count": 1 00:27:12.412 } 00:27:12.412 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3948031 00:27:12.412 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3948031 ']' 00:27:12.412 19:33:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3948031 00:27:12.412 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:27:12.412 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.412 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3948031 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3948031' 00:27:12.672 killing process with pid 3948031 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3948031 00:27:12.672 Received shutdown signal, test time was about 10.000000 seconds 00:27:12.672 00:27:12.672 Latency(us) 00:27:12.672 [2024-11-26T18:33:46.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.672 [2024-11-26T18:33:46.537Z] =================================================================================================================== 00:27:12.672 [2024-11-26T18:33:46.537Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3948031 00:27:12.672 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:12.933 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.933 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:12.933 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:13.193 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:13.193 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:13.193 19:33:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:13.193 [2024-11-26 19:33:47.030091] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:13.453 request: 00:27:13.453 { 00:27:13.453 "uuid": "d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad", 00:27:13.453 "method": 
"bdev_lvol_get_lvstores", 00:27:13.453 "req_id": 1 00:27:13.453 } 00:27:13.453 Got JSON-RPC error response 00:27:13.453 response: 00:27:13.453 { 00:27:13.453 "code": -19, 00:27:13.453 "message": "No such device" 00:27:13.453 } 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:13.453 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:13.712 aio_bdev 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 102afe72-e543-4bc5-a935-1eb60fefbd1c 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=102afe72-e543-4bc5-a935-1eb60fefbd1c 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:13.712 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 102afe72-e543-4bc5-a935-1eb60fefbd1c -t 2000 00:27:13.972 [ 00:27:13.972 { 00:27:13.972 "name": "102afe72-e543-4bc5-a935-1eb60fefbd1c", 00:27:13.972 "aliases": [ 00:27:13.972 "lvs/lvol" 00:27:13.972 ], 00:27:13.972 "product_name": "Logical Volume", 00:27:13.972 "block_size": 4096, 00:27:13.972 "num_blocks": 38912, 00:27:13.972 "uuid": "102afe72-e543-4bc5-a935-1eb60fefbd1c", 00:27:13.972 "assigned_rate_limits": { 00:27:13.972 "rw_ios_per_sec": 0, 00:27:13.972 "rw_mbytes_per_sec": 0, 00:27:13.972 "r_mbytes_per_sec": 0, 00:27:13.972 "w_mbytes_per_sec": 0 00:27:13.972 }, 00:27:13.972 "claimed": false, 00:27:13.972 "zoned": false, 00:27:13.972 "supported_io_types": { 00:27:13.972 "read": true, 00:27:13.972 "write": true, 00:27:13.972 "unmap": true, 00:27:13.972 "flush": false, 00:27:13.972 "reset": true, 00:27:13.972 "nvme_admin": false, 00:27:13.972 "nvme_io": false, 00:27:13.972 "nvme_io_md": false, 00:27:13.972 "write_zeroes": true, 00:27:13.972 "zcopy": false, 00:27:13.972 "get_zone_info": false, 00:27:13.972 "zone_management": false, 00:27:13.972 "zone_append": false, 00:27:13.972 "compare": false, 00:27:13.972 "compare_and_write": false, 00:27:13.972 "abort": false, 00:27:13.972 "seek_hole": true, 00:27:13.972 "seek_data": true, 00:27:13.972 "copy": false, 00:27:13.972 "nvme_iov_md": false 00:27:13.972 }, 00:27:13.972 "driver_specific": { 00:27:13.972 "lvol": { 00:27:13.972 "lvol_store_uuid": "d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad", 00:27:13.972 "base_bdev": "aio_bdev", 00:27:13.972 
"thin_provision": false, 00:27:13.972 "num_allocated_clusters": 38, 00:27:13.972 "snapshot": false, 00:27:13.972 "clone": false, 00:27:13.972 "esnap_clone": false 00:27:13.972 } 00:27:13.972 } 00:27:13.972 } 00:27:13.972 ] 00:27:13.972 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:27:13.972 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:13.972 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:14.232 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:14.232 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 00:27:14.232 19:33:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:14.232 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:14.232 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 102afe72-e543-4bc5-a935-1eb60fefbd1c 00:27:14.491 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d94ba471-c4b9-4d9f-b9e8-0205cc1cb9ad 
00:27:14.491 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:14.752 00:27:14.752 real 0m15.336s 00:27:14.752 user 0m15.065s 00:27:14.752 sys 0m1.191s 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:14.752 ************************************ 00:27:14.752 END TEST lvs_grow_clean 00:27:14.752 ************************************ 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:14.752 ************************************ 00:27:14.752 START TEST lvs_grow_dirty 00:27:14.752 ************************************ 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:14.752 19:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:14.752 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:15.012 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:15.012 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:15.271 19:33:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:15.271 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:15.271 19:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:15.271 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:15.271 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:15.271 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 lvol 150 00:27:15.529 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=323ae729-1818-464a-b9be-93afaec22d4e 00:27:15.529 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:15.529 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:15.529 [2024-11-26 19:33:49.338016] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:15.529 [2024-11-26 
19:33:49.338189] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:15.529 true 00:27:15.529 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:15.529 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:15.787 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:15.787 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:16.047 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 323ae729-1818-464a-b9be-93afaec22d4e 00:27:16.047 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:16.307 [2024-11-26 19:33:49.962534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.307 19:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3951420 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3951420 /var/tmp/bdevperf.sock 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3951420 ']' 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:16.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.307 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:16.307 [2024-11-26 19:33:50.153526] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:27:16.307 [2024-11-26 19:33:50.153567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951420 ] 00:27:16.567 [2024-11-26 19:33:50.209311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.568 [2024-11-26 19:33:50.239205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.568 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.568 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:27:16.568 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:16.828 Nvme0n1 00:27:16.828 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:17.087 [ 00:27:17.087 { 00:27:17.087 "name": "Nvme0n1", 00:27:17.087 "aliases": [ 00:27:17.087 "323ae729-1818-464a-b9be-93afaec22d4e" 00:27:17.087 ], 00:27:17.087 "product_name": "NVMe disk", 00:27:17.087 "block_size": 4096, 00:27:17.087 "num_blocks": 38912, 00:27:17.087 "uuid": "323ae729-1818-464a-b9be-93afaec22d4e", 00:27:17.087 "numa_id": 0, 00:27:17.087 "assigned_rate_limits": { 00:27:17.087 "rw_ios_per_sec": 0, 00:27:17.087 "rw_mbytes_per_sec": 0, 00:27:17.087 "r_mbytes_per_sec": 0, 00:27:17.087 "w_mbytes_per_sec": 0 00:27:17.087 }, 00:27:17.087 "claimed": false, 00:27:17.087 "zoned": false, 
00:27:17.087 "supported_io_types": { 00:27:17.087 "read": true, 00:27:17.087 "write": true, 00:27:17.087 "unmap": true, 00:27:17.087 "flush": true, 00:27:17.087 "reset": true, 00:27:17.087 "nvme_admin": true, 00:27:17.087 "nvme_io": true, 00:27:17.087 "nvme_io_md": false, 00:27:17.087 "write_zeroes": true, 00:27:17.087 "zcopy": false, 00:27:17.087 "get_zone_info": false, 00:27:17.087 "zone_management": false, 00:27:17.087 "zone_append": false, 00:27:17.087 "compare": true, 00:27:17.087 "compare_and_write": true, 00:27:17.087 "abort": true, 00:27:17.087 "seek_hole": false, 00:27:17.087 "seek_data": false, 00:27:17.087 "copy": true, 00:27:17.087 "nvme_iov_md": false 00:27:17.087 }, 00:27:17.087 "memory_domains": [ 00:27:17.087 { 00:27:17.087 "dma_device_id": "system", 00:27:17.087 "dma_device_type": 1 00:27:17.087 } 00:27:17.087 ], 00:27:17.087 "driver_specific": { 00:27:17.087 "nvme": [ 00:27:17.087 { 00:27:17.087 "trid": { 00:27:17.087 "trtype": "TCP", 00:27:17.087 "adrfam": "IPv4", 00:27:17.087 "traddr": "10.0.0.2", 00:27:17.087 "trsvcid": "4420", 00:27:17.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:17.087 }, 00:27:17.087 "ctrlr_data": { 00:27:17.087 "cntlid": 1, 00:27:17.087 "vendor_id": "0x8086", 00:27:17.087 "model_number": "SPDK bdev Controller", 00:27:17.087 "serial_number": "SPDK0", 00:27:17.087 "firmware_revision": "25.01", 00:27:17.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:17.087 "oacs": { 00:27:17.087 "security": 0, 00:27:17.087 "format": 0, 00:27:17.087 "firmware": 0, 00:27:17.087 "ns_manage": 0 00:27:17.087 }, 00:27:17.087 "multi_ctrlr": true, 00:27:17.087 "ana_reporting": false 00:27:17.087 }, 00:27:17.087 "vs": { 00:27:17.087 "nvme_version": "1.3" 00:27:17.087 }, 00:27:17.087 "ns_data": { 00:27:17.087 "id": 1, 00:27:17.087 "can_share": true 00:27:17.087 } 00:27:17.087 } 00:27:17.087 ], 00:27:17.087 "mp_policy": "active_passive" 00:27:17.087 } 00:27:17.087 } 00:27:17.087 ] 00:27:17.087 19:33:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3951569 00:27:17.087 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:17.087 19:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:17.087 Running I/O for 10 seconds... 00:27:18.465 Latency(us) 00:27:18.465 [2024-11-26T18:33:52.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.465 Nvme0n1 : 1.00 25156.00 98.27 0.00 0.00 0.00 0.00 0.00 00:27:18.465 [2024-11-26T18:33:52.330Z] =================================================================================================================== 00:27:18.465 [2024-11-26T18:33:52.330Z] Total : 25156.00 98.27 0.00 0.00 0.00 0.00 0.00 00:27:18.465 00:27:19.033 19:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:19.293 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:19.293 Nvme0n1 : 2.00 25248.50 98.63 0.00 0.00 0.00 0.00 0.00 00:27:19.293 [2024-11-26T18:33:53.158Z] =================================================================================================================== 00:27:19.293 [2024-11-26T18:33:53.158Z] Total : 25248.50 98.63 0.00 0.00 0.00 0.00 0.00 00:27:19.293 00:27:19.293 true 00:27:19.293 19:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:19.293 19:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:19.553 19:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:19.553 19:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:19.553 19:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3951569 00:27:20.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:20.121 Nvme0n1 : 3.00 25279.33 98.75 0.00 0.00 0.00 0.00 0.00 00:27:20.121 [2024-11-26T18:33:53.986Z] =================================================================================================================== 00:27:20.121 [2024-11-26T18:33:53.986Z] Total : 25279.33 98.75 0.00 0.00 0.00 0.00 0.00 00:27:20.121 00:27:21.060 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:21.060 Nvme0n1 : 4.00 25326.00 98.93 0.00 0.00 0.00 0.00 0.00 00:27:21.060 [2024-11-26T18:33:54.925Z] =================================================================================================================== 00:27:21.060 [2024-11-26T18:33:54.925Z] Total : 25326.00 98.93 0.00 0.00 0.00 0.00 0.00 00:27:21.060 00:27:22.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:22.438 Nvme0n1 : 5.00 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:27:22.438 [2024-11-26T18:33:56.303Z] =================================================================================================================== 00:27:22.438 [2024-11-26T18:33:56.303Z] Total : 25355.00 99.04 0.00 0.00 0.00 0.00 0.00 00:27:22.438 00:27:23.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:27:23.376 Nvme0n1 : 6.00 25384.00 99.16 0.00 0.00 0.00 0.00 0.00 00:27:23.376 [2024-11-26T18:33:57.241Z] =================================================================================================================== 00:27:23.376 [2024-11-26T18:33:57.241Z] Total : 25384.00 99.16 0.00 0.00 0.00 0.00 0.00 00:27:23.376 00:27:24.315 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:24.315 Nvme0n1 : 7.00 25400.57 99.22 0.00 0.00 0.00 0.00 0.00 00:27:24.315 [2024-11-26T18:33:58.180Z] =================================================================================================================== 00:27:24.315 [2024-11-26T18:33:58.180Z] Total : 25400.57 99.22 0.00 0.00 0.00 0.00 0.00 00:27:24.315 00:27:25.253 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:25.253 Nvme0n1 : 8.00 25416.75 99.28 0.00 0.00 0.00 0.00 0.00 00:27:25.253 [2024-11-26T18:33:59.118Z] =================================================================================================================== 00:27:25.253 [2024-11-26T18:33:59.118Z] Total : 25416.75 99.28 0.00 0.00 0.00 0.00 0.00 00:27:25.253 00:27:26.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:26.191 Nvme0n1 : 9.00 25435.00 99.36 0.00 0.00 0.00 0.00 0.00 00:27:26.191 [2024-11-26T18:34:00.056Z] =================================================================================================================== 00:27:26.191 [2024-11-26T18:34:00.056Z] Total : 25435.00 99.36 0.00 0.00 0.00 0.00 0.00 00:27:26.191 00:27:27.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:27.131 Nvme0n1 : 10.00 25444.20 99.39 0.00 0.00 0.00 0.00 0.00 00:27:27.131 [2024-11-26T18:34:00.996Z] =================================================================================================================== 00:27:27.131 [2024-11-26T18:34:00.996Z] Total : 25444.20 99.39 0.00 0.00 0.00 0.00 0.00 00:27:27.131 00:27:27.131 
00:27:27.131 Latency(us) 00:27:27.131 [2024-11-26T18:34:00.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:27.131 Nvme0n1 : 10.01 25443.35 99.39 0.00 0.00 5028.17 1952.43 9229.65 00:27:27.131 [2024-11-26T18:34:00.996Z] =================================================================================================================== 00:27:27.131 [2024-11-26T18:34:00.996Z] Total : 25443.35 99.39 0.00 0.00 5028.17 1952.43 9229.65 00:27:27.131 { 00:27:27.131 "results": [ 00:27:27.131 { 00:27:27.131 "job": "Nvme0n1", 00:27:27.131 "core_mask": "0x2", 00:27:27.131 "workload": "randwrite", 00:27:27.131 "status": "finished", 00:27:27.131 "queue_depth": 128, 00:27:27.131 "io_size": 4096, 00:27:27.131 "runtime": 10.005365, 00:27:27.131 "iops": 25443.349642916575, 00:27:27.131 "mibps": 99.38808454264287, 00:27:27.131 "io_failed": 0, 00:27:27.131 "io_timeout": 0, 00:27:27.131 "avg_latency_us": 5028.168126225923, 00:27:27.131 "min_latency_us": 1952.4266666666667, 00:27:27.131 "max_latency_us": 9229.653333333334 00:27:27.131 } 00:27:27.131 ], 00:27:27.131 "core_count": 1 00:27:27.131 } 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3951420 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3951420 ']' 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3951420 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:27.131 19:34:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3951420 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3951420' 00:27:27.131 killing process with pid 3951420 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3951420 00:27:27.131 Received shutdown signal, test time was about 10.000000 seconds 00:27:27.131 00:27:27.131 Latency(us) 00:27:27.131 [2024-11-26T18:34:00.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:27.131 [2024-11-26T18:34:00.996Z] =================================================================================================================== 00:27:27.131 [2024-11-26T18:34:00.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:27.131 19:34:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3951420 00:27:27.391 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:27.391 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:27.650 19:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:27.650 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3947649 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3947649 00:27:27.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3947649 Killed "${NVMF_APP[@]}" "$@" 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3953917 00:27:27.910 19:34:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3953917 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3953917 ']' 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:27.910 19:34:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:27.910 [2024-11-26 19:34:01.684711] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:27.910 [2024-11-26 19:34:01.685718] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:27:27.910 [2024-11-26 19:34:01.685759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.910 [2024-11-26 19:34:01.759081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.170 [2024-11-26 19:34:01.789279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.170 [2024-11-26 19:34:01.789308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.170 [2024-11-26 19:34:01.789314] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.170 [2024-11-26 19:34:01.789319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.170 [2024-11-26 19:34:01.789323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.170 [2024-11-26 19:34:01.789821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.170 [2024-11-26 19:34:01.841552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:28.171 [2024-11-26 19:34:01.841740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.739 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:28.999 [2024-11-26 19:34:02.629371] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:28.999 [2024-11-26 19:34:02.629449] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:28.999 [2024-11-26 19:34:02.629474] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 323ae729-1818-464a-b9be-93afaec22d4e 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=323ae729-1818-464a-b9be-93afaec22d4e 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:28.999 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 323ae729-1818-464a-b9be-93afaec22d4e -t 2000 00:27:29.257 [ 00:27:29.257 { 00:27:29.257 "name": "323ae729-1818-464a-b9be-93afaec22d4e", 00:27:29.257 "aliases": [ 00:27:29.257 "lvs/lvol" 00:27:29.257 ], 00:27:29.257 "product_name": "Logical Volume", 00:27:29.257 "block_size": 4096, 00:27:29.257 "num_blocks": 38912, 00:27:29.257 "uuid": "323ae729-1818-464a-b9be-93afaec22d4e", 00:27:29.257 "assigned_rate_limits": { 00:27:29.257 "rw_ios_per_sec": 0, 00:27:29.257 "rw_mbytes_per_sec": 0, 00:27:29.257 "r_mbytes_per_sec": 0, 00:27:29.257 "w_mbytes_per_sec": 0 00:27:29.257 }, 00:27:29.257 "claimed": false, 00:27:29.257 "zoned": false, 00:27:29.257 "supported_io_types": { 00:27:29.257 "read": true, 00:27:29.257 "write": true, 00:27:29.257 "unmap": true, 00:27:29.257 "flush": false, 00:27:29.257 "reset": true, 00:27:29.257 "nvme_admin": false, 00:27:29.257 "nvme_io": false, 00:27:29.257 "nvme_io_md": false, 00:27:29.257 "write_zeroes": true, 
00:27:29.257 "zcopy": false, 00:27:29.257 "get_zone_info": false, 00:27:29.257 "zone_management": false, 00:27:29.257 "zone_append": false, 00:27:29.257 "compare": false, 00:27:29.257 "compare_and_write": false, 00:27:29.257 "abort": false, 00:27:29.257 "seek_hole": true, 00:27:29.257 "seek_data": true, 00:27:29.257 "copy": false, 00:27:29.257 "nvme_iov_md": false 00:27:29.257 }, 00:27:29.257 "driver_specific": { 00:27:29.257 "lvol": { 00:27:29.257 "lvol_store_uuid": "85429bd6-c840-497f-8a8a-00e03ae5c3b1", 00:27:29.257 "base_bdev": "aio_bdev", 00:27:29.257 "thin_provision": false, 00:27:29.257 "num_allocated_clusters": 38, 00:27:29.257 "snapshot": false, 00:27:29.257 "clone": false, 00:27:29.257 "esnap_clone": false 00:27:29.257 } 00:27:29.257 } 00:27:29.257 } 00:27:29.257 ] 00:27:29.257 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:27:29.257 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:29.257 19:34:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:27:29.257 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:27:29.516 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:29.517 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:27:29.517 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:27:29.517 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:29.775 [2024-11-26 19:34:03.410335] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:29.775 request: 00:27:29.775 { 00:27:29.775 "uuid": "85429bd6-c840-497f-8a8a-00e03ae5c3b1", 00:27:29.775 "method": "bdev_lvol_get_lvstores", 00:27:29.775 "req_id": 1 00:27:29.775 } 00:27:29.775 Got JSON-RPC error response 00:27:29.775 response: 00:27:29.775 { 00:27:29.775 "code": -19, 00:27:29.775 "message": "No such device" 00:27:29.775 } 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.775 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:30.034 aio_bdev 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 323ae729-1818-464a-b9be-93afaec22d4e 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=323ae729-1818-464a-b9be-93afaec22d4e 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:30.034 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:30.295 19:34:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 323ae729-1818-464a-b9be-93afaec22d4e -t 2000 00:27:30.295 [ 00:27:30.295 { 00:27:30.295 "name": "323ae729-1818-464a-b9be-93afaec22d4e", 00:27:30.295 "aliases": [ 00:27:30.295 "lvs/lvol" 00:27:30.295 ], 00:27:30.295 "product_name": "Logical Volume", 00:27:30.295 "block_size": 4096, 00:27:30.295 "num_blocks": 38912, 00:27:30.295 "uuid": "323ae729-1818-464a-b9be-93afaec22d4e", 00:27:30.295 "assigned_rate_limits": { 00:27:30.295 "rw_ios_per_sec": 0, 00:27:30.295 "rw_mbytes_per_sec": 0, 00:27:30.295 
"r_mbytes_per_sec": 0, 00:27:30.295 "w_mbytes_per_sec": 0 00:27:30.295 }, 00:27:30.295 "claimed": false, 00:27:30.295 "zoned": false, 00:27:30.295 "supported_io_types": { 00:27:30.295 "read": true, 00:27:30.295 "write": true, 00:27:30.295 "unmap": true, 00:27:30.295 "flush": false, 00:27:30.295 "reset": true, 00:27:30.295 "nvme_admin": false, 00:27:30.295 "nvme_io": false, 00:27:30.295 "nvme_io_md": false, 00:27:30.295 "write_zeroes": true, 00:27:30.295 "zcopy": false, 00:27:30.295 "get_zone_info": false, 00:27:30.295 "zone_management": false, 00:27:30.295 "zone_append": false, 00:27:30.295 "compare": false, 00:27:30.295 "compare_and_write": false, 00:27:30.295 "abort": false, 00:27:30.295 "seek_hole": true, 00:27:30.295 "seek_data": true, 00:27:30.295 "copy": false, 00:27:30.295 "nvme_iov_md": false 00:27:30.295 }, 00:27:30.295 "driver_specific": { 00:27:30.295 "lvol": { 00:27:30.295 "lvol_store_uuid": "85429bd6-c840-497f-8a8a-00e03ae5c3b1", 00:27:30.295 "base_bdev": "aio_bdev", 00:27:30.295 "thin_provision": false, 00:27:30.295 "num_allocated_clusters": 38, 00:27:30.295 "snapshot": false, 00:27:30.295 "clone": false, 00:27:30.295 "esnap_clone": false 00:27:30.295 } 00:27:30.295 } 00:27:30.295 } 00:27:30.295 ] 00:27:30.295 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:27:30.295 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:30.295 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:30.554 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:30.554 19:34:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:30.554 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:30.554 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:30.554 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 323ae729-1818-464a-b9be-93afaec22d4e 00:27:30.814 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85429bd6-c840-497f-8a8a-00e03ae5c3b1 00:27:31.073 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:31.073 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:31.073 00:27:31.073 real 0m16.374s 00:27:31.073 user 0m34.124s 00:27:31.073 sys 0m2.685s 00:27:31.073 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.073 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:31.073 ************************************ 00:27:31.073 END TEST lvs_grow_dirty 00:27:31.073 ************************************ 
00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:27:31.334 nvmf_trace.0 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:31.334 19:34:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:31.334 19:34:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:31.334 rmmod nvme_tcp 00:27:31.334 rmmod nvme_fabrics 00:27:31.334 rmmod nvme_keyring 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3953917 ']' 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3953917 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3953917 ']' 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3953917 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3953917 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:31.334 
19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3953917' 00:27:31.334 killing process with pid 3953917 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3953917 00:27:31.334 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3953917 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:27:31.594 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:31.595 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:31.595 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:31.595 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.595 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.595 19:34:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.502 
19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:33.502 00:27:33.502 real 0m39.879s 00:27:33.502 user 0m51.183s 00:27:33.502 sys 0m8.083s 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:33.502 ************************************ 00:27:33.502 END TEST nvmf_lvs_grow 00:27:33.502 ************************************ 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:33.502 ************************************ 00:27:33.502 START TEST nvmf_bdev_io_wait 00:27:33.502 ************************************ 00:27:33.502 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:27:33.762 * Looking for test storage... 
00:27:33.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:33.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.762 --rc genhtml_branch_coverage=1 00:27:33.762 --rc genhtml_function_coverage=1 00:27:33.762 --rc genhtml_legend=1 00:27:33.762 --rc geninfo_all_blocks=1 00:27:33.762 --rc geninfo_unexecuted_blocks=1 00:27:33.762 00:27:33.762 ' 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:33.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.762 --rc genhtml_branch_coverage=1 00:27:33.762 --rc genhtml_function_coverage=1 00:27:33.762 --rc genhtml_legend=1 00:27:33.762 --rc geninfo_all_blocks=1 00:27:33.762 --rc geninfo_unexecuted_blocks=1 00:27:33.762 00:27:33.762 ' 00:27:33.762 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:33.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.762 --rc genhtml_branch_coverage=1 00:27:33.762 --rc genhtml_function_coverage=1 00:27:33.763 --rc genhtml_legend=1 00:27:33.763 --rc geninfo_all_blocks=1 00:27:33.763 --rc geninfo_unexecuted_blocks=1 00:27:33.763 00:27:33.763 ' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:33.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.763 --rc genhtml_branch_coverage=1 00:27:33.763 --rc genhtml_function_coverage=1 
00:27:33.763 --rc genhtml_legend=1 00:27:33.763 --rc geninfo_all_blocks=1 00:27:33.763 --rc geninfo_unexecuted_blocks=1 00:27:33.763 00:27:33.763 ' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:33.763 19:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.763 19:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.763 19:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:33.763 19:34:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:27:33.763 19:34:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:27:39.041 19:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:39.041 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:39.041 Found 
0000:31:00.1 (0x8086 - 0x159b) 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:39.041 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:39.042 Found net devices under 0000:31:00.0: cvl_0_0 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:39.042 Found net devices under 0000:31:00.1: cvl_0_1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:27:39.042 19:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:39.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:39.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:27:39.042 00:27:39.042 --- 10.0.0.2 ping statistics --- 00:27:39.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.042 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:27:39.042 00:27:39.042 --- 10.0.0.1 ping statistics --- 00:27:39.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.042 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:39.042 19:34:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3959150 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3959150 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3959150 ']' 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:39.042 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:39.043 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:39.043 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.043 19:34:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:27:39.043 [2024-11-26 19:34:12.855640] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:39.043 [2024-11-26 19:34:12.856613] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:39.043 [2024-11-26 19:34:12.856649] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.300 [2024-11-26 19:34:12.940996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.300 [2024-11-26 19:34:12.978853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.300 [2024-11-26 19:34:12.978886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.300 [2024-11-26 19:34:12.978894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:39.300 [2024-11-26 19:34:12.978900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:39.300 [2024-11-26 19:34:12.978906] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:39.300 [2024-11-26 19:34:12.980395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.300 [2024-11-26 19:34:12.980547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.300 [2024-11-26 19:34:12.980695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.300 [2024-11-26 19:34:12.980696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.300 [2024-11-26 19:34:12.980953] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:39.866 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:39.866 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.867 19:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:39.867 [2024-11-26 19:34:13.725454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:39.867 [2024-11-26 19:34:13.725822] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:39.867 [2024-11-26 19:34:13.726010] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:27:39.867 [2024-11-26 19:34:13.726173] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.867 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:40.128 [2024-11-26 19:34:13.733487] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:40.128 Malloc0 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.128 19:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:40.128 [2024-11-26 19:34:13.785380] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3959183 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3959184 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3959186 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:27:40.128 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.128 { 00:27:40.128 "params": { 00:27:40.128 "name": "Nvme$subsystem", 00:27:40.129 "trtype": "$TEST_TRANSPORT", 00:27:40.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "$NVMF_PORT", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.129 "hdgst": ${hdgst:-false}, 00:27:40.129 "ddgst": ${ddgst:-false} 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 } 00:27:40.129 EOF 00:27:40.129 )") 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3959188 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:40.129 19:34:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.129 { 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme$subsystem", 00:27:40.129 "trtype": "$TEST_TRANSPORT", 00:27:40.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "$NVMF_PORT", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.129 "hdgst": ${hdgst:-false}, 00:27:40.129 "ddgst": ${ddgst:-false} 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 } 00:27:40.129 EOF 00:27:40.129 )") 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.129 { 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme$subsystem", 00:27:40.129 "trtype": "$TEST_TRANSPORT", 00:27:40.129 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "$NVMF_PORT", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.129 "hdgst": ${hdgst:-false}, 00:27:40.129 "ddgst": ${ddgst:-false} 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 } 00:27:40.129 EOF 00:27:40.129 )") 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.129 { 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme$subsystem", 00:27:40.129 "trtype": "$TEST_TRANSPORT", 00:27:40.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "$NVMF_PORT", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.129 "hdgst": ${hdgst:-false}, 00:27:40.129 "ddgst": ${ddgst:-false} 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 } 00:27:40.129 EOF 00:27:40.129 )") 
00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3959183 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme1", 00:27:40.129 "trtype": "tcp", 00:27:40.129 "traddr": "10.0.0.2", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "4420", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.129 "hdgst": false, 00:27:40.129 "ddgst": false 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 }' 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme1", 00:27:40.129 "trtype": "tcp", 00:27:40.129 "traddr": "10.0.0.2", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "4420", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.129 "hostnqn": 
"nqn.2016-06.io.spdk:host1", 00:27:40.129 "hdgst": false, 00:27:40.129 "ddgst": false 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 }' 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme1", 00:27:40.129 "trtype": "tcp", 00:27:40.129 "traddr": "10.0.0.2", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "4420", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.129 "hdgst": false, 00:27:40.129 "ddgst": false 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 }' 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:27:40.129 19:34:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.129 "params": { 00:27:40.129 "name": "Nvme1", 00:27:40.129 "trtype": "tcp", 00:27:40.129 "traddr": "10.0.0.2", 00:27:40.129 "adrfam": "ipv4", 00:27:40.129 "trsvcid": "4420", 00:27:40.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:40.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:40.129 "hdgst": false, 00:27:40.129 "ddgst": false 00:27:40.129 }, 00:27:40.129 "method": "bdev_nvme_attach_controller" 00:27:40.129 }' 00:27:40.129 [2024-11-26 19:34:13.823293] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
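The four interleaved `gen_nvmf_target_json` expansions above all follow the same pattern from nvmf/common.sh: expand a heredoc template once per subsystem into a `config` array, join the blocks with `IFS=,`, and feed the result to bdevperf over `/dev/fd/63`. A minimal sketch of that pattern, with the transport, address, and port hard-coded to the values seen in the log (the real helper reads them from the test environment):

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json heredoc pattern from nvmf/common.sh:
# one JSON block per subsystem, joined with commas for bdevperf's --json input.
gen_target_json() {
  local subsystem
  local -a config=()
  local TEST_TRANSPORT=tcp NVMF_FIRST_TARGET_IP=10.0.0.2 NVMF_PORT=4420
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}
gen_target_json 1
```

In the trace the joined output is additionally piped through `jq .` (common.sh@584) before being handed to each bdevperf instance, which is why four separate `jq .`/`printf` pairs appear, one per workload process.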
00:27:40.129 [2024-11-26 19:34:13.823362] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:40.129 [2024-11-26 19:34:13.827800] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:40.129 [2024-11-26 19:34:13.827864] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:27:40.129 [2024-11-26 19:34:13.828890] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:40.129 [2024-11-26 19:34:13.828951] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:27:40.129 [2024-11-26 19:34:13.830167] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:27:40.129 [2024-11-26 19:34:13.830235] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:27:40.390 [2024-11-26 19:34:14.036209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.390 [2024-11-26 19:34:14.076969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:40.390 [2024-11-26 19:34:14.121328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.390 [2024-11-26 19:34:14.162956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:27:40.390 [2024-11-26 19:34:14.182539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.390 [2024-11-26 19:34:14.222456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:27:40.390 [2024-11-26 19:34:14.244504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.651 [2024-11-26 19:34:14.280520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:27:40.651 Running I/O for 1 seconds... 00:27:40.651 Running I/O for 1 seconds... 00:27:40.651 Running I/O for 1 seconds... 00:27:40.651 Running I/O for 1 seconds... 
00:27:41.589 16652.00 IOPS, 65.05 MiB/s 00:27:41.589 Latency(us) 00:27:41.589 [2024-11-26T18:34:15.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.589 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:27:41.589 Nvme1n1 : 1.01 16713.00 65.29 0.00 0.00 7637.58 3713.71 11687.25 00:27:41.589 [2024-11-26T18:34:15.454Z] =================================================================================================================== 00:27:41.589 [2024-11-26T18:34:15.454Z] Total : 16713.00 65.29 0.00 0.00 7637.58 3713.71 11687.25 00:27:41.589 6928.00 IOPS, 27.06 MiB/s 00:27:41.589 Latency(us) 00:27:41.589 [2024-11-26T18:34:15.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.589 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:27:41.589 Nvme1n1 : 1.02 6954.20 27.16 0.00 0.00 18252.02 5761.71 27743.57 00:27:41.589 [2024-11-26T18:34:15.454Z] =================================================================================================================== 00:27:41.589 [2024-11-26T18:34:15.454Z] Total : 6954.20 27.16 0.00 0.00 18252.02 5761.71 27743.57 00:27:41.589 181656.00 IOPS, 709.59 MiB/s 00:27:41.589 Latency(us) 00:27:41.589 [2024-11-26T18:34:15.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.589 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:27:41.589 Nvme1n1 : 1.00 181296.87 708.19 0.00 0.00 702.13 298.67 1966.08 00:27:41.589 [2024-11-26T18:34:15.454Z] =================================================================================================================== 00:27:41.589 [2024-11-26T18:34:15.454Z] Total : 181296.87 708.19 0.00 0.00 702.13 298.67 1966.08 00:27:41.849 7115.00 IOPS, 27.79 MiB/s 00:27:41.849 Latency(us) 00:27:41.849 [2024-11-26T18:34:15.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.849 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:27:41.849 Nvme1n1 : 1.01 7234.89 28.26 0.00 0.00 17641.13 4177.92 33641.81 00:27:41.849 [2024-11-26T18:34:15.714Z] =================================================================================================================== 00:27:41.849 [2024-11-26T18:34:15.714Z] Total : 7234.89 28.26 0.00 0.00 17641.13 4177.92 33641.81 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3959184 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3959186 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3959188 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.849 19:34:15 
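The `wait 3959183` / `wait 3959184` / `wait 3959186` / `wait 3959188` steps above are the tail of a simple fan-out: bdev_io_wait.sh launches one bdevperf per workload (write, read, flush, unmap) in the background, records each PID, then waits on them in turn so the four 1-second runs overlap. A sketch of that pattern, with `sleep` standing in for the real bdevperf invocations:

```shell
#!/usr/bin/env bash
# Fan-out pattern from bdev_io_wait.sh: run one job per workload in the
# background, collect PIDs, then wait for all of them before tearing down.
run_workload() { sleep 0.1; }   # placeholder for: bdevperf ... -w "$1" -t 1

run_all() {
  local w
  local -a pids=()
  for w in write read flush unmap; do
    run_workload "$w" & pids+=("$!")
  done
  wait "${pids[@]}"             # blocks until every workload finishes
  echo "all workloads finished"
}
run_all
```

Waiting on the stored PIDs (rather than a bare `wait`) is what lets the script surface each bdevperf's exit status individually, matching the separate `wait <pid>` lines in the trace.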
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.849 rmmod nvme_tcp 00:27:41.849 rmmod nvme_fabrics 00:27:41.849 rmmod nvme_keyring 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3959150 ']' 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3959150 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3959150 ']' 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3959150 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3959150 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3959150' 00:27:41.849 killing process with pid 3959150 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3959150 00:27:41.849 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3959150 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.110 19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.110 
19:34:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.650 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.650 00:27:44.650 real 0m10.566s 00:27:44.650 user 0m14.333s 00:27:44.650 sys 0m5.836s 00:27:44.650 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.650 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:27:44.650 ************************************ 00:27:44.650 END TEST nvmf_bdev_io_wait 00:27:44.650 ************************************ 00:27:44.650 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:44.651 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:44.651 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.651 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.651 ************************************ 00:27:44.651 START TEST nvmf_queue_depth 00:27:44.651 ************************************ 00:27:44.651 19:34:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:27:44.651 * Looking for test storage... 
00:27:44.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.651 --rc genhtml_branch_coverage=1 00:27:44.651 --rc genhtml_function_coverage=1 00:27:44.651 --rc genhtml_legend=1 00:27:44.651 --rc geninfo_all_blocks=1 00:27:44.651 --rc geninfo_unexecuted_blocks=1 00:27:44.651 00:27:44.651 ' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.651 --rc genhtml_branch_coverage=1 00:27:44.651 --rc genhtml_function_coverage=1 00:27:44.651 --rc genhtml_legend=1 00:27:44.651 --rc geninfo_all_blocks=1 00:27:44.651 --rc geninfo_unexecuted_blocks=1 00:27:44.651 00:27:44.651 ' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.651 --rc genhtml_branch_coverage=1 00:27:44.651 --rc genhtml_function_coverage=1 00:27:44.651 --rc genhtml_legend=1 00:27:44.651 --rc geninfo_all_blocks=1 00:27:44.651 --rc geninfo_unexecuted_blocks=1 00:27:44.651 00:27:44.651 ' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:44.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.651 --rc genhtml_branch_coverage=1 00:27:44.651 --rc genhtml_function_coverage=1 00:27:44.651 --rc genhtml_legend=1 00:27:44.651 --rc 
geninfo_all_blocks=1 00:27:44.651 --rc geninfo_unexecuted_blocks=1 00:27:44.651 00:27:44.651 ' 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:27:44.651 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.652 19:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.652 19:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.652 19:34:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.652 19:34:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:27:49.931 
19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:49.931 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:49.931 19:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:49.931 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:49.931 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:49.932 Found net devices under 0000:31:00.0: cvl_0_0 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:49.932 Found net devices under 0000:31:00.1: cvl_0_1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:49.932 19:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:49.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:49.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.524 ms 00:27:49.932 00:27:49.932 --- 10.0.0.2 ping statistics --- 00:27:49.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.932 rtt min/avg/max/mdev = 0.524/0.524/0.524/0.000 ms 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:49.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:27:49.932 00:27:49.932 --- 10.0.0.1 ping statistics --- 00:27:49.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.932 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:49.932 19:34:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3963951 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3963951 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3963951 ']' 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:49.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:49.932 19:34:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:27:49.932 [2024-11-26 19:34:23.503709] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:49.932 [2024-11-26 19:34:23.504699] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:49.932 [2024-11-26 19:34:23.504735] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:49.932 [2024-11-26 19:34:23.592241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.932 [2024-11-26 19:34:23.627800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:49.932 [2024-11-26 19:34:23.627833] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:49.932 [2024-11-26 19:34:23.627841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:49.932 [2024-11-26 19:34:23.627848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:49.932 [2024-11-26 19:34:23.627854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:49.932 [2024-11-26 19:34:23.628411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.932 [2024-11-26 19:34:23.684325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:49.932 [2024-11-26 19:34:23.684578] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.502 [2024-11-26 19:34:24.313157] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.502 19:34:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.502 Malloc0 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.502 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.503 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.503 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.503 [2024-11-26 19:34:24.360931] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.503 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.503 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3964227 00:27:50.762 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:50.762 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3964227 /var/tmp/bdevperf.sock 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3964227 ']' 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:50.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:27:50.763 19:34:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:50.763 [2024-11-26 19:34:24.398708] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:27:50.763 [2024-11-26 19:34:24.398756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3964227 ] 00:27:50.763 [2024-11-26 19:34:24.475582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.763 [2024-11-26 19:34:24.512765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.331 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.331 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:27:51.331 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:51.331 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.331 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:27:51.589 NVMe0n1 00:27:51.589 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.589 19:34:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:51.847 Running I/O for 10 seconds... 00:27:53.727 8650.00 IOPS, 33.79 MiB/s [2024-11-26T18:34:28.529Z] 10552.00 IOPS, 41.22 MiB/s [2024-11-26T18:34:29.906Z] 11493.00 IOPS, 44.89 MiB/s [2024-11-26T18:34:30.843Z] 12021.25 IOPS, 46.96 MiB/s [2024-11-26T18:34:31.780Z] 12290.80 IOPS, 48.01 MiB/s [2024-11-26T18:34:32.717Z] 12494.33 IOPS, 48.81 MiB/s [2024-11-26T18:34:33.654Z] 12657.57 IOPS, 49.44 MiB/s [2024-11-26T18:34:34.590Z] 12786.00 IOPS, 49.95 MiB/s [2024-11-26T18:34:35.968Z] 12858.78 IOPS, 50.23 MiB/s [2024-11-26T18:34:35.968Z] 12935.20 IOPS, 50.53 MiB/s 00:28:02.103 Latency(us) 00:28:02.103 [2024-11-26T18:34:35.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.103 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:02.103 Verification LBA range: start 0x0 length 0x4000 00:28:02.103 NVMe0n1 : 10.05 12971.68 50.67 0.00 0.00 78661.71 9502.72 67720.53 00:28:02.103 [2024-11-26T18:34:35.968Z] =================================================================================================================== 00:28:02.103 [2024-11-26T18:34:35.968Z] Total : 12971.68 50.67 0.00 0.00 78661.71 9502.72 67720.53 00:28:02.103 { 00:28:02.103 "results": [ 00:28:02.103 { 00:28:02.103 "job": "NVMe0n1", 00:28:02.103 "core_mask": "0x1", 00:28:02.103 "workload": "verify", 00:28:02.103 "status": "finished", 00:28:02.103 "verify_range": { 00:28:02.103 "start": 0, 00:28:02.103 "length": 16384 00:28:02.103 }, 00:28:02.103 "queue_depth": 1024, 00:28:02.103 "io_size": 4096, 00:28:02.103 "runtime": 10.046426, 00:28:02.103 "iops": 12971.677689160304, 00:28:02.103 "mibps": 50.67061597328244, 00:28:02.103 "io_failed": 0, 00:28:02.103 
"io_timeout": 0, 00:28:02.103 "avg_latency_us": 78661.71092739099, 00:28:02.103 "min_latency_us": 9502.72, 00:28:02.103 "max_latency_us": 67720.53333333334 00:28:02.103 } 00:28:02.103 ], 00:28:02.103 "core_count": 1 00:28:02.103 } 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3964227 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3964227 ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3964227 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3964227 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3964227' 00:28:02.103 killing process with pid 3964227 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3964227 00:28:02.103 Received shutdown signal, test time was about 10.000000 seconds 00:28:02.103 00:28:02.103 Latency(us) 00:28:02.103 [2024-11-26T18:34:35.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.103 [2024-11-26T18:34:35.968Z] 
=================================================================================================================== 00:28:02.103 [2024-11-26T18:34:35.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3964227 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.103 rmmod nvme_tcp 00:28:02.103 rmmod nvme_fabrics 00:28:02.103 rmmod nvme_keyring 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3963951 ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@518 -- # killprocess 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3963951 ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3963951' 00:28:02.103 killing process with pid 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3963951 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:28:02.103 19:34:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.103 19:34:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.673 19:34:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.673 00:28:04.673 real 0m20.057s 00:28:04.673 user 0m23.894s 00:28:04.673 sys 0m5.553s 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:04.673 ************************************ 00:28:04.673 END TEST nvmf_queue_depth 00:28:04.673 ************************************ 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:04.673 ************************************ 00:28:04.673 START TEST nvmf_target_multipath 00:28:04.673 ************************************ 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:04.673 * Looking for test storage... 00:28:04.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 
-- # read -ra ver1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.673 --rc genhtml_branch_coverage=1 00:28:04.673 --rc genhtml_function_coverage=1 00:28:04.673 --rc genhtml_legend=1 00:28:04.673 --rc geninfo_all_blocks=1 00:28:04.673 --rc geninfo_unexecuted_blocks=1 00:28:04.673 00:28:04.673 ' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.673 --rc genhtml_branch_coverage=1 00:28:04.673 --rc genhtml_function_coverage=1 00:28:04.673 --rc genhtml_legend=1 00:28:04.673 --rc geninfo_all_blocks=1 00:28:04.673 --rc geninfo_unexecuted_blocks=1 00:28:04.673 00:28:04.673 ' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.673 --rc genhtml_branch_coverage=1 00:28:04.673 --rc genhtml_function_coverage=1 00:28:04.673 --rc genhtml_legend=1 00:28:04.673 --rc geninfo_all_blocks=1 00:28:04.673 --rc geninfo_unexecuted_blocks=1 00:28:04.673 00:28:04.673 ' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:04.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:04.673 --rc genhtml_branch_coverage=1 00:28:04.673 --rc genhtml_function_coverage=1 00:28:04.673 --rc genhtml_legend=1 00:28:04.673 --rc geninfo_all_blocks=1 00:28:04.673 --rc geninfo_unexecuted_blocks=1 00:28:04.673 00:28:04.673 ' 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:04.673 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:04.674 19:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.674 19:34:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:28:04.674 19:34:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:28:10.109 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:10.109 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:10.109 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:10.109 Found net devices under 0000:31:00.0: cvl_0_0 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.109 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:10.109 Found net devices under 0000:31:00.1: cvl_0_1 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.109 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.109 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.110 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:10.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:28:10.110 00:28:10.110 --- 10.0.0.2 ping statistics --- 00:28:10.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.110 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:10.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms 00:28:10.110 00:28:10.110 --- 10.0.0.1 ping statistics --- 00:28:10.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.110 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:10.110 only one NIC for nvmf test 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:10.110 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:10.110 rmmod nvme_tcp 00:28:10.110 rmmod nvme_fabrics 00:28:10.110 rmmod nvme_keyring 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:10.110 19:34:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:10.110 19:34:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.019 
19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.019 00:28:12.019 real 0m7.458s 00:28:12.019 user 0m1.368s 00:28:12.019 sys 0m3.966s 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:12.019 ************************************ 00:28:12.019 END TEST nvmf_target_multipath 00:28:12.019 ************************************ 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:12.019 ************************************ 00:28:12.019 START TEST nvmf_zcopy 00:28:12.019 ************************************ 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:12.019 * Looking for test storage... 
00:28:12.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:12.019 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:28:12.020 19:34:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:12.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.020 --rc genhtml_branch_coverage=1 00:28:12.020 --rc genhtml_function_coverage=1 00:28:12.020 --rc genhtml_legend=1 00:28:12.020 --rc geninfo_all_blocks=1 00:28:12.020 --rc geninfo_unexecuted_blocks=1 00:28:12.020 00:28:12.020 ' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:12.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.020 --rc genhtml_branch_coverage=1 00:28:12.020 --rc genhtml_function_coverage=1 00:28:12.020 --rc genhtml_legend=1 00:28:12.020 --rc geninfo_all_blocks=1 00:28:12.020 --rc geninfo_unexecuted_blocks=1 00:28:12.020 00:28:12.020 ' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:12.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.020 --rc genhtml_branch_coverage=1 00:28:12.020 --rc genhtml_function_coverage=1 00:28:12.020 --rc genhtml_legend=1 00:28:12.020 --rc geninfo_all_blocks=1 00:28:12.020 --rc geninfo_unexecuted_blocks=1 00:28:12.020 00:28:12.020 ' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:12.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:12.020 --rc genhtml_branch_coverage=1 00:28:12.020 --rc genhtml_function_coverage=1 00:28:12.020 --rc genhtml_legend=1 00:28:12.020 --rc geninfo_all_blocks=1 00:28:12.020 --rc geninfo_unexecuted_blocks=1 00:28:12.020 00:28:12.020 ' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.020 19:34:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:12.020 19:34:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:28:12.020 19:34:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:17.298 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.298 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:28:17.298 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:17.298 
19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.299 19:34:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:17.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:17.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:17.299 Found net devices under 0000:31:00.0: cvl_0_0 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:17.299 Found net devices under 0000:31:00.1: cvl_0_1 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:17.299 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.300 19:34:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.300 19:34:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:17.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:28:17.300 00:28:17.300 --- 10.0.0.2 ping statistics --- 00:28:17.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.300 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:17.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:28:17.300 00:28:17.300 --- 10.0.0.1 ping statistics --- 00:28:17.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.300 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:17.300 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=3975232 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3975232 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3975232 ']' 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:17.560 19:34:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:17.560 [2024-11-26 19:34:51.231334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:17.560 [2024-11-26 19:34:51.232487] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:28:17.560 [2024-11-26 19:34:51.232540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.560 [2024-11-26 19:34:51.324944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.560 [2024-11-26 19:34:51.375389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.560 [2024-11-26 19:34:51.375439] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.560 [2024-11-26 19:34:51.375448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.560 [2024-11-26 19:34:51.375455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.560 [2024-11-26 19:34:51.375462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:17.560 [2024-11-26 19:34:51.376247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.819 [2024-11-26 19:34:51.450374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:17.819 [2024-11-26 19:34:51.450630] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.388 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 [2024-11-26 19:34:52.045070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 
19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 [2024-11-26 19:34:52.061074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 malloc0 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:18.389 { 00:28:18.389 "params": { 00:28:18.389 "name": "Nvme$subsystem", 00:28:18.389 "trtype": "$TEST_TRANSPORT", 00:28:18.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:18.389 "adrfam": "ipv4", 00:28:18.389 "trsvcid": "$NVMF_PORT", 00:28:18.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:18.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:18.389 "hdgst": ${hdgst:-false}, 00:28:18.389 "ddgst": ${ddgst:-false} 00:28:18.389 }, 00:28:18.389 "method": "bdev_nvme_attach_controller" 00:28:18.389 } 00:28:18.389 EOF 00:28:18.389 )") 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:18.389 19:34:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:18.389 19:34:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:18.389 "params": { 00:28:18.389 "name": "Nvme1", 00:28:18.389 "trtype": "tcp", 00:28:18.389 "traddr": "10.0.0.2", 00:28:18.389 "adrfam": "ipv4", 00:28:18.389 "trsvcid": "4420", 00:28:18.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:18.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:18.389 "hdgst": false, 00:28:18.389 "ddgst": false 00:28:18.389 }, 00:28:18.389 "method": "bdev_nvme_attach_controller" 00:28:18.389 }' 00:28:18.389 [2024-11-26 19:34:52.126613] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:28:18.389 [2024-11-26 19:34:52.126660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3975582 ] 00:28:18.389 [2024-11-26 19:34:52.209094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.389 [2024-11-26 19:34:52.246145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.649 Running I/O for 10 seconds... 
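The xtrace above shows `gen_nvmf_target_json` building the bdevperf `--json` config from a heredoc that is expanded per subsystem, collected into an array, and joined with `IFS=,`. The following is a minimal, self-contained re-creation of that pattern as it appears in the trace; the variable values (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) are placeholders copied from the log output, not read from a live environment.

```shell
#!/usr/bin/env bash
# Sketch of the config-generation pattern seen in nvmf/common.sh:
# expand a JSON fragment per subsystem via a heredoc, append it to an
# array, then join the fragments. Values below are placeholders that
# mirror what the log printed for this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  # ${hdgst:-false}/${ddgst:-false} default to false when unset,
  # matching the expanded JSON shown in the trace.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the real helper does before jq.
IFS=,
printf '%s\n' "${config[*]}"
```

In the real test this output is fed to bdevperf through `/dev/fd/62`, so the config never touches disk.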
00:28:20.532 6629.00 IOPS, 51.79 MiB/s [2024-11-26T18:34:55.783Z] 7331.50 IOPS, 57.28 MiB/s [2024-11-26T18:34:56.725Z] 8185.00 IOPS, 63.95 MiB/s [2024-11-26T18:34:57.665Z] 8620.25 IOPS, 67.35 MiB/s [2024-11-26T18:34:58.607Z] 8877.40 IOPS, 69.35 MiB/s [2024-11-26T18:34:59.551Z] 9044.50 IOPS, 70.66 MiB/s [2024-11-26T18:35:00.492Z] 9168.86 IOPS, 71.63 MiB/s [2024-11-26T18:35:01.433Z] 9260.38 IOPS, 72.35 MiB/s [2024-11-26T18:35:02.817Z] 9332.78 IOPS, 72.91 MiB/s [2024-11-26T18:35:02.817Z] 9391.20 IOPS, 73.37 MiB/s
00:28:28.952 Latency(us)
00:28:28.952 [2024-11-26T18:35:02.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:28:28.952 Verification LBA range: start 0x0 length 0x1000
00:28:28.952 Nvme1n1 : 10.01 9395.23 73.40 0.00 0.00 13582.08 2457.60 26542.08
00:28:28.952 [2024-11-26T18:35:02.817Z] ===================================================================================================================
00:28:28.952 [2024-11-26T18:35:02.817Z] Total : 9395.23 73.40 0.00 0.00 13582.08 2457.60 26542.08
00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3977695 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:28:28.952 19:35:02
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:28.952 { 00:28:28.952 "params": { 00:28:28.952 "name": "Nvme$subsystem", 00:28:28.952 "trtype": "$TEST_TRANSPORT", 00:28:28.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:28.952 "adrfam": "ipv4", 00:28:28.952 "trsvcid": "$NVMF_PORT", 00:28:28.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:28.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:28.952 "hdgst": ${hdgst:-false}, 00:28:28.952 "ddgst": ${ddgst:-false} 00:28:28.952 }, 00:28:28.952 "method": "bdev_nvme_attach_controller" 00:28:28.952 } 00:28:28.952 EOF 00:28:28.952 )") 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:28:28.952 [2024-11-26 19:35:02.528650] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.528683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:28:28.952 19:35:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:28.952 "params": { 00:28:28.952 "name": "Nvme1", 00:28:28.952 "trtype": "tcp", 00:28:28.952 "traddr": "10.0.0.2", 00:28:28.952 "adrfam": "ipv4", 00:28:28.952 "trsvcid": "4420", 00:28:28.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:28.952 "hdgst": false, 00:28:28.952 "ddgst": false 00:28:28.952 }, 00:28:28.952 "method": "bdev_nvme_attach_controller" 00:28:28.952 }' 00:28:28.952 [2024-11-26 19:35:02.536619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.536629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.544616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.544625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.546356] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:28:28.952 [2024-11-26 19:35:02.546393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3977695 ] 00:28:28.952 [2024-11-26 19:35:02.552615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.552624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.560615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.560624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.568616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.568624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.576616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.576624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.584615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.584623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.592615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.592623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.600615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.600623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:28:28.952 [2024-11-26 19:35:02.602169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.952 [2024-11-26 19:35:02.608616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.608626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.616616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.616626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.624615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.624625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.631837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.952 [2024-11-26 19:35:02.632616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.952 [2024-11-26 19:35:02.632625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.952 [2024-11-26 19:35:02.640615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.640624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.648621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.648633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.656619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.656631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.664617] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.664627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.672617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.672626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.680617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.680625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.688616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.688625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.696622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.696636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.704623] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.704635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.712618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.712627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.720618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.720628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.728620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.728630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.736620] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.736632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.744617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.744626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.752617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.752626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.760617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.760626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.768617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.768626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.776617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.776625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.784618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.784629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.792617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 
[2024-11-26 19:35:02.792625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.800617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.800626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:28.953 [2024-11-26 19:35:02.808617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:28.953 [2024-11-26 19:35:02.808626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.816617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.816625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.824618] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.824629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.832617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.832625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.840617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.840625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.848617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.848625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.856617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.856626] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.864617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.864624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.872617] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.872626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.880628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.880645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.888619] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.888627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 Running I/O for 5 seconds... 
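The long run of paired messages above (`Requested NSID 1 already in use` from `spdk_nvmf_subsystem_add_ns_ext`, then `Unable to add namespace` from the RPC layer) comes from the test repeatedly attempting to attach a namespace under an NSID that is already attached while I/O runs. The sketch below is a hypothetical plain-shell model of that rejection logic, not the actual zcopy.sh loop or SPDK code: it tracks attached NSIDs in a set and refuses duplicates, which is why the first add succeeds and every retry fails.

```shell
#!/usr/bin/env bash
# Hypothetical model of why each retry in the log is rejected:
# an NSID can be attached at most once per subsystem.
attached=""

add_ns() {
  nsid=$1
  case " $attached " in
    *" $nsid "*)
      # Mirrors the target-side error seen in the log.
      echo "Requested NSID $nsid already in use" >&2
      return 1
      ;;
  esac
  attached="$attached $nsid"
  echo "namespace $nsid attached"
}

add_ns 1                                      # first add succeeds
add_ns 1 || echo "Unable to add namespace"    # retry rejected, RPC-side message
```

In the real run the test keeps issuing the RPC while bdevperf drives I/O, so the pair of messages repeats once per attempt with a fresh timestamp.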
00:28:29.213 [2024-11-26 19:35:02.900509] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.900525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.912990] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.913005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.925190] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.925206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.937735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.937751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.948767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.948782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.954548] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.954562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.963167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.963182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.971933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.971948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.977664] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.977679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.987744] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.987760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:02.993467] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:02.993482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.003497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.003512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.009242] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.009256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.019443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.019459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.025469] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.025484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.036024] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.036039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.041818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.041832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.051320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.051335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.059971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.059986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.065648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.065663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.213 [2024-11-26 19:35:03.075666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.213 [2024-11-26 19:35:03.075682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.081407] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.081421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.091671] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.091686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.097387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.097402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.107282] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 
[2024-11-26 19:35:03.107303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.114719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.114734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.125452] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.125466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.137812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.137828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.149894] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.149910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.474 [2024-11-26 19:35:03.161723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.474 [2024-11-26 19:35:03.161738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.172697] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.172712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.178535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.178551] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.187346] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.187360] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.196006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.196021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.201643] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.201659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.212069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.212083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.217691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.217706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.227975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.227990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.233705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.233720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.243751] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.243766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:29.475 [2024-11-26 19:35:03.249732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.249747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:29.475 [2024-11-26 19:35:03.259220] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:29.475 [2024-11-26 19:35:03.259234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... previous two error messages repeated for each subsequent add-namespace attempt from 19:35:03.267934 through 19:35:04.836535; duplicate entries omitted ...]
00:28:30.257 19332.00 IOPS, 151.03 MiB/s [2024-11-26T18:35:04.122Z]
00:28:31.045 [2024-11-26 19:35:04.842404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.842418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:31.045 [2024-11-26 19:35:04.851261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.851279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 [2024-11-26 19:35:04.860041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.860056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 [2024-11-26 19:35:04.865812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.865827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 [2024-11-26 19:35:04.875562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.875576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 [2024-11-26 19:35:04.881495] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.881510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 [2024-11-26 19:35:04.892394] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.892409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.045 19382.50 IOPS, 151.43 MiB/s [2024-11-26T18:35:04.910Z] [2024-11-26 19:35:04.905238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.045 [2024-11-26 19:35:04.905253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.917681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.917696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:31.306 [2024-11-26 19:35:04.929883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.929899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.941741] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.941756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.953572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.953587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.966114] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.966129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.976641] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.976656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.982441] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.982455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:04.991052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:04.991067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.000533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.000548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.006351] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.006366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.015914] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.015929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.021655] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.021670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.031270] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.031285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.038636] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.038651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.049485] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.049500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.061694] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.061709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.074086] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.074105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.084769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.084784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.090506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.090520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.099255] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.099270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.107912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.107927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.113640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.113655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.123628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.123643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.129356] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.129370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.139321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.139335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.147401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 
[2024-11-26 19:35:05.147415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.154984] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.154998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.306 [2024-11-26 19:35:05.163526] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.306 [2024-11-26 19:35:05.163541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.170664] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.170678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.181477] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.181491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.193560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.193575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.205787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.205802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.217848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.217863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.229616] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.229631] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.242105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.242120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.252887] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.252902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.265715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.265730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.276530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.276545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.282178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.282193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.291913] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.291928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.297732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.297747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.307296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.307310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:31.567 [2024-11-26 19:35:05.316107] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.316121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.321589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.567 [2024-11-26 19:35:05.321603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.567 [2024-11-26 19:35:05.331720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.331735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.337435] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.337450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.347723] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.347739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.353385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.353399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.363673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.363687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.369445] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.369463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.379885] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.379900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.385634] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.385649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.396004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.396020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.401677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.401692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.411105] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.411121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.420451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.420467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.568 [2024-11-26 19:35:05.425949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.568 [2024-11-26 19:35:05.425965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.435454] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.435470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.442732] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.442748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.452561] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.452577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.458167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.458182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.466736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.466751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.477433] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.829 [2024-11-26 19:35:05.477448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.829 [2024-11-26 19:35:05.489801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.489817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.501418] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.501434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.513534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.513549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.525398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 
[2024-11-26 19:35:05.525413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.537960] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.537975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.548938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.548957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.561727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.561743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.573949] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.573964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.585246] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.585261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.597745] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.597760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.609207] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.609222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.621266] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.621281] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.633692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.633707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.645884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.645899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.656516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.656532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.669343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.669358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:31.830 [2024-11-26 19:35:05.681411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:31.830 [2024-11-26 19:35:05.681426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.694037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.694052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.705772] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.705787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.717823] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.717838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:32.091 [2024-11-26 19:35:05.728707] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.728723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.734603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.734619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.743314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.743330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.751502] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.751517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.757347] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.757368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.767333] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.767349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.775366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.775381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.783420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.783435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.791112] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.791127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.800602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.800618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.806382] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.806397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.815188] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.815203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.823908] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.823923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.829740] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.829756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.839698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.839713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.845551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.845565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.855686] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.855701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.861420] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.861434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.871698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.871714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.877403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.877417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.887272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.887287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.895982] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.895997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 19417.00 IOPS, 151.70 MiB/s [2024-11-26T18:35:05.956Z] [2024-11-26 19:35:05.902195] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.902210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.910959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.910974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.920169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.920184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.925753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.925768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.935868] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.935883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.941546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.941561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.091 [2024-11-26 19:35:05.950952] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.091 [2024-11-26 19:35:05.950967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.959848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:05.959863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.965576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:05.965590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.975754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:05.975769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.981544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 
[2024-11-26 19:35:05.981559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.991758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:05.991774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:05.997639] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:05.997654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.007405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.007420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.014653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.014667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.024329] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.024344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.030167] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.030182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.039212] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.039227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.047797] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.047812] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.054069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.054084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.063144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.063159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.071820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.071835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.077588] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.077602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.088079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.088094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.093983] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.093998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.103692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.103707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.353 [2024-11-26 19:35:06.109305] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.353 [2024-11-26 19:35:06.109319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:32.353 [2024-11-26 19:35:06.119181] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.119195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.128423] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.128438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.141228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.141243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.153912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.153927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.164794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.164809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.170727] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.170742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.179455] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.179469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.185281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.185296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.195703] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.195717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.201546] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.201561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.354 [2024-11-26 19:35:06.211197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.354 [2024-11-26 19:35:06.211212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.220524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.220539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.226229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.226244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.234975] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.234990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.244533] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.244548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.250405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.250419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.259183] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.259198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.267778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.267793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.273608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.273623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.283663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.283677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.289465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.289479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.300226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.300241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.313087] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.313106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.326039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.326054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.336598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 
[2024-11-26 19:35:06.336613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.342319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.342334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.351397] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.351412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.357254] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.357268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.367373] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.367388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.376587] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.376602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.382392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.382406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.390945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.390959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.400198] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.400213] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.405943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.405958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.414669] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.414684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.425429] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.425443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.437600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.437615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.449726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.449741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.462130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.462145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.614 [2024-11-26 19:35:06.473025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.614 [2024-11-26 19:35:06.473039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.485174] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.485190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:32.874 [2024-11-26 19:35:06.497945] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.497960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.508764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.508779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.514447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.514462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.523060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.523076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.532252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.532267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.538187] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.538202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.547359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.547374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.555402] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.555417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.561294] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.561312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.571427] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.571442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.580074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.580089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.585900] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.585914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.595565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.595579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.601269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.601283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.611428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.611443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.617194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.617209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.627516] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.627531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.633319] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.633333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.643476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.643491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.652034] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.652049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.657767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.874 [2024-11-26 19:35:06.657782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.874 [2024-11-26 19:35:06.668175] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.668190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.673922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.673936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.683465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.683480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.689265] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 
[2024-11-26 19:35:06.689279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.699590] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.699605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.705269] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.705283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.715696] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.715714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.721570] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.721585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.731576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.731590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:32.875 [2024-11-26 19:35:06.737482] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:32.875 [2024-11-26 19:35:06.737496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.747822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.747837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.753715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.753729] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.763781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.763795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.769601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.769615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.779845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.779860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.785645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.785659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.794845] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.794860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.804522] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.804537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.810140] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.810155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.818897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.818912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:28:33.135 [2024-11-26 19:35:06.828313] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.828328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.834096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.834115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.843525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.843539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.849314] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.849328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.859565] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.859580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.865359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.865378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.875281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.875296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.884219] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.884234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.889974] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.889989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.899769] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.899785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 19419.75 IOPS, 151.72 MiB/s [2024-11-26T18:35:07.000Z] [2024-11-26 19:35:06.912710] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.135 [2024-11-26 19:35:06.912725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.135 [2024-11-26 19:35:06.918613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.918628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.927340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.927355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.934662] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.934677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.944544] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.944560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.950359] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.950374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.959295] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.959310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.968001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.968017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.973633] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.973648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.983875] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.983891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.136 [2024-11-26 19:35:06.989576] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.136 [2024-11-26 19:35:06.989592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.397 [2024-11-26 19:35:06.999508] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.397 [2024-11-26 19:35:06.999524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.397 [2024-11-26 19:35:07.005280] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.397 [2024-11-26 19:35:07.005294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.397 [2024-11-26 19:35:07.015883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:33.397 [2024-11-26 19:35:07.015898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.397 [2024-11-26 19:35:07.021612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:28:33.397 [2024-11-26 19:35:07.021627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:33.397 [... the same two-message pair (subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: "Unable to add namespace") repeats with fresh timestamps from 19:35:07.030993 through 19:35:07.899071 as the test loops on the duplicate-NSID error path; repeats elided ...] 00:28:34.179 19413.00 IOPS, 151.66 MiB/s [2024-11-26T18:35:08.044Z] [2024-11-26 19:35:07.946595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:34.179 [2024-11-26 19:35:07.946610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:34.179 00:28:34.179 Latency(us) 00:28:34.179 [2024-11-26T18:35:08.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.179 Job: Nvme1n1 
(Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:28:34.179 Nvme1n1 : 5.04 19267.05 150.52 0.00 0.00 6584.75 2239.15 47404.37 00:28:34.179 [2024-11-26T18:35:08.044Z] =================================================================================================================== 00:28:34.179 [2024-11-26T18:35:08.044Z] Total : 19267.05 150.52 0.00 0.00 6584.75 2239.15 47404.37 00:28:34.179 [2024-11-26 19:35:07.952621] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:34.179 [2024-11-26 19:35:07.952633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:34.179 [... the same error pair repeats roughly every 8 ms from 19:35:07.960619 through 19:35:08.048623; repeats elided ...] 00:28:34.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3977695) - No such process 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3977695 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:34.439 
19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:34.439 delay0 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.439 19:35:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:28:34.439 [2024-11-26 19:35:08.155547] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:41.015 Initializing NVMe Controllers 00:28:41.015 Attached 
to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.015 Initialization complete. Launching workers. 00:28:41.015 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 292, failed: 9712 00:28:41.015 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 9940, failed to submit 64 00:28:41.015 success 9846, unsuccessful 94, failed 0 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.015 rmmod nvme_tcp 00:28:41.015 rmmod nvme_fabrics 00:28:41.015 rmmod nvme_keyring 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3975232 
']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3975232 ']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3975232' 00:28:41.015 killing process with pid 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3975232 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:28:41.015 19:35:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.015 19:35:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.922 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.922 00:28:42.922 real 0m31.102s 00:28:42.922 user 0m41.921s 00:28:42.922 sys 0m9.655s 00:28:42.922 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:42.922 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:42.922 ************************************ 00:28:42.922 END TEST nvmf_zcopy 00:28:42.922 ************************************ 00:28:42.922 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:42.922 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:42.923 19:35:16 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:42.923 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:42.923 ************************************ 00:28:42.923 START TEST nvmf_nmic 00:28:42.923 ************************************ 00:28:42.923 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:28:42.923 * Looking for test storage... 00:28:42.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:42.923 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:42.923 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:28:42.923 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@337 -- # read -ra ver2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@355 -- # echo 2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.183 --rc genhtml_branch_coverage=1 00:28:43.183 --rc genhtml_function_coverage=1 00:28:43.183 --rc genhtml_legend=1 00:28:43.183 --rc geninfo_all_blocks=1 00:28:43.183 --rc geninfo_unexecuted_blocks=1 00:28:43.183 00:28:43.183 ' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.183 --rc genhtml_branch_coverage=1 00:28:43.183 --rc genhtml_function_coverage=1 00:28:43.183 --rc genhtml_legend=1 00:28:43.183 --rc geninfo_all_blocks=1 00:28:43.183 --rc geninfo_unexecuted_blocks=1 00:28:43.183 00:28:43.183 ' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.183 --rc genhtml_branch_coverage=1 00:28:43.183 --rc genhtml_function_coverage=1 00:28:43.183 --rc genhtml_legend=1 00:28:43.183 --rc geninfo_all_blocks=1 00:28:43.183 --rc geninfo_unexecuted_blocks=1 00:28:43.183 
00:28:43.183 ' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:43.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:43.183 --rc genhtml_branch_coverage=1 00:28:43.183 --rc genhtml_function_coverage=1 00:28:43.183 --rc genhtml_legend=1 00:28:43.183 --rc geninfo_all_blocks=1 00:28:43.183 --rc geninfo_unexecuted_blocks=1 00:28:43.183 00:28:43.183 ' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.183 19:35:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.183 19:35:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.183 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
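The `cmp_versions` trace from scripts/common.sh earlier in this section (the `lt 1.15 2` check gating the lcov options) splits each version string on `.`/`-` and compares the numeric fields pairwise. A minimal standalone sketch of that comparison — function and variable names here are illustrative, not SPDK's exact code:

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version less-than check in the style of the
# cmp_versions trace above (illustrative names, not scripts/common.sh itself).

# Return 0 (true) if $1 < $2, comparing dot/dash-separated numeric fields;
# missing fields are treated as 0 (so 1.15 vs 2 compares as 1.15 vs 2.0).
version_lt() {
    local IFS='.-'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}
        b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"        # matches the lt 1.15 2 result above
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

This is why the trace above takes the "old lcov" branch: field 1 of `1.15` is less than field 1 of `2`, so the stricter `--rc lcov_*` options get exported.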
00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.184 19:35:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:48.597 19:35:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.597 19:35:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:48.597 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:48.597 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:48.598 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.598 19:35:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:48.598 Found net devices under 0000:31:00.0: cvl_0_0 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.598 19:35:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:48.598 Found net devices under 0000:31:00.1: cvl_0_1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.598 19:35:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:28:48.598 00:28:48.598 --- 10.0.0.2 ping statistics --- 00:28:48.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.598 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:28:48.598 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:48.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:28:48.859 00:28:48.859 --- 10.0.0.1 ping statistics --- 00:28:48.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.859 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3984581 
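The `nvmf_tcp_init` sequence traced above (`ip netns add` through the two pings) moves the target-side interface into a private network namespace, addresses both ends, opens TCP/4420 for NVMe/TCP, and sanity-pings in both directions. The dry-run sketch below replays that sequence with the interface and address values taken from the log; `run()` only echoes, so it works without root — this is a reconstruction of the traced commands, not the common.sh helper itself:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology nvmftestinit builds above.
# run() just prints each command; replace its body with "$@" (needs
# CAP_NET_ADMIN) to apply the topology for real.
set -euo pipefail

NS=cvl_0_0_ns_spdk            # namespace holding the target side
TGT_IF=cvl_0_0  INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"            # target NIC into the namespace
run ip addr add "$INIT_IP/24" dev "$INIT_IF"     # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
# open TCP/4420 on the initiator-facing interface; the comment tag is what
# the later "iptables-save | grep -v SPDK_NVMF | iptables-restore" cleans up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
run ping -c 1 "$TGT_IP"                          # root ns -> namespace
run ip netns exec "$NS" ping -c 1 "$INIT_IP"     # namespace -> root ns
```

Tagging the iptables rule with an `SPDK_NVMF` comment is what makes the `iptr` teardown at the start of this section possible: cleanup filters saved rules by that comment instead of tracking rule positions.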
00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3984581 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3984581 ']' 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:48.859 19:35:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:48.859 [2024-11-26 19:35:22.538118] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:48.859 [2024-11-26 19:35:22.539297] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:28:48.859 [2024-11-26 19:35:22.539350] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.859 [2024-11-26 19:35:22.619084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:48.859 [2024-11-26 19:35:22.659958] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.859 [2024-11-26 19:35:22.659999] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.859 [2024-11-26 19:35:22.660006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.859 [2024-11-26 19:35:22.660011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.859 [2024-11-26 19:35:22.660016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:48.859 [2024-11-26 19:35:22.661500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.859 [2024-11-26 19:35:22.661655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:48.859 [2024-11-26 19:35:22.661816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.859 [2024-11-26 19:35:22.661817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:48.859 [2024-11-26 19:35:22.718488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:48.859 [2024-11-26 19:35:22.718902] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:48.859 [2024-11-26 19:35:22.719565] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:28:48.859 [2024-11-26 19:35:22.719951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:48.859 [2024-11-26 19:35:22.719999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 [2024-11-26 19:35:23.358586] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 Malloc0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 [2024-11-26 19:35:23.418342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.800 19:35:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:28:49.800 test case1: single bdev can't be used in multiple subsystems 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 [2024-11-26 19:35:23.442164] 
bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:28:49.800 [2024-11-26 19:35:23.442179] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:28:49.800 [2024-11-26 19:35:23.442185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:49.800 request: 00:28:49.800 { 00:28:49.800 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:28:49.800 "namespace": { 00:28:49.800 "bdev_name": "Malloc0", 00:28:49.800 "no_auto_visible": false, 00:28:49.800 "hide_metadata": false 00:28:49.800 }, 00:28:49.800 "method": "nvmf_subsystem_add_ns", 00:28:49.800 "req_id": 1 00:28:49.800 } 00:28:49.800 Got JSON-RPC error response 00:28:49.800 response: 00:28:49.800 { 00:28:49.800 "code": -32602, 00:28:49.800 "message": "Invalid parameters" 00:28:49.800 } 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:28:49.800 Adding namespace failed - expected result. 
00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:28:49.800 test case2: host connect to nvmf target in multiple paths 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:49.800 [2024-11-26 19:35:23.450244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.800 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:28:50.060 19:35:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:28:50.319 19:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:28:50.319 19:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:28:50.319 19:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:28:50.319 19:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:28:50.319 19:35:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:28:52.853 19:35:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:28:52.853 [global] 00:28:52.853 thread=1 00:28:52.853 invalidate=1 00:28:52.853 rw=write 00:28:52.853 time_based=1 00:28:52.853 runtime=1 00:28:52.853 ioengine=libaio 00:28:52.853 direct=1 00:28:52.853 bs=4096 00:28:52.853 iodepth=1 00:28:52.853 norandommap=0 00:28:52.853 numjobs=1 00:28:52.853 00:28:52.853 verify_dump=1 00:28:52.853 verify_backlog=512 00:28:52.853 verify_state_save=0 00:28:52.853 do_verify=1 00:28:52.853 verify=crc32c-intel 00:28:52.853 [job0] 00:28:52.853 filename=/dev/nvme0n1 00:28:52.853 Could not set queue depth (nvme0n1) 00:28:52.853 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:28:52.853 fio-3.35 00:28:52.853 Starting 1 thread 00:28:53.789 00:28:53.789 job0: (groupid=0, jobs=1): err= 0: pid=3985726: Tue Nov 26 
19:35:27 2024 00:28:53.789 read: IOPS=515, BW=2062KiB/s (2111kB/s)(2064KiB/1001msec) 00:28:53.789 slat (nsec): min=3571, max=36282, avg=14101.29, stdev=3508.95 00:28:53.789 clat (usec): min=481, max=1162, avg=859.58, stdev=78.40 00:28:53.789 lat (usec): min=497, max=1191, avg=873.68, stdev=78.67 00:28:53.789 clat percentiles (usec): 00:28:53.789 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 766], 20.00th=[ 807], 00:28:53.789 | 30.00th=[ 832], 40.00th=[ 848], 50.00th=[ 865], 60.00th=[ 881], 00:28:53.789 | 70.00th=[ 898], 80.00th=[ 914], 90.00th=[ 947], 95.00th=[ 971], 00:28:53.789 | 99.00th=[ 1057], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1156], 00:28:53.789 | 99.99th=[ 1156] 00:28:53.789 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:28:53.789 slat (usec): min=4, max=32092, avg=47.92, stdev=1002.40 00:28:53.789 clat (usec): min=181, max=788, avg=483.27, stdev=97.33 00:28:53.789 lat (usec): min=186, max=32826, avg=531.19, stdev=1015.28 00:28:53.789 clat percentiles (usec): 00:28:53.789 | 1.00th=[ 253], 5.00th=[ 306], 10.00th=[ 363], 20.00th=[ 404], 00:28:53.789 | 30.00th=[ 441], 40.00th=[ 457], 50.00th=[ 482], 60.00th=[ 506], 00:28:53.789 | 70.00th=[ 529], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 644], 00:28:53.789 | 99.00th=[ 709], 99.50th=[ 734], 99.90th=[ 758], 99.95th=[ 791], 00:28:53.789 | 99.99th=[ 791] 00:28:53.789 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:28:53.789 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:28:53.789 lat (usec) : 250=0.52%, 500=37.60%, 750=30.84%, 1000=30.32% 00:28:53.789 lat (msec) : 2=0.71% 00:28:53.789 cpu : usr=1.00%, sys=2.50%, ctx=1543, majf=0, minf=1 00:28:53.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.789 issued rwts: 
total=516,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:53.789 00:28:53.789 Run status group 0 (all jobs): 00:28:53.789 READ: bw=2062KiB/s (2111kB/s), 2062KiB/s-2062KiB/s (2111kB/s-2111kB/s), io=2064KiB (2114kB), run=1001-1001msec 00:28:53.789 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:28:53.789 00:28:53.789 Disk stats (read/write): 00:28:53.789 nvme0n1: ios=538/822, merge=0/0, ticks=1404/390, in_queue=1794, util=98.90% 00:28:53.789 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:54.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:28:54.049 19:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:54.049 rmmod nvme_tcp 00:28:54.049 rmmod nvme_fabrics 00:28:54.049 rmmod nvme_keyring 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3984581 ']' 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3984581 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3984581 ']' 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3984581 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3984581 
00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3984581' 00:28:54.049 killing process with pid 3984581 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3984581 00:28:54.049 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3984581 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.309 19:35:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.309 19:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.213 19:35:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:56.213 00:28:56.213 real 0m13.297s 00:28:56.213 user 0m31.314s 00:28:56.213 sys 0m5.942s 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:28:56.213 ************************************ 00:28:56.213 END TEST nvmf_nmic 00:28:56.213 ************************************ 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:56.213 ************************************ 00:28:56.213 START TEST nvmf_fio_target 00:28:56.213 ************************************ 00:28:56.213 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:28:56.473 * Looking for test storage... 
00:28:56.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:56.473 
19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:56.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.473 --rc genhtml_branch_coverage=1 00:28:56.473 --rc genhtml_function_coverage=1 00:28:56.473 --rc genhtml_legend=1 00:28:56.473 --rc geninfo_all_blocks=1 00:28:56.473 --rc geninfo_unexecuted_blocks=1 00:28:56.473 00:28:56.473 ' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:56.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.473 --rc genhtml_branch_coverage=1 00:28:56.473 --rc genhtml_function_coverage=1 00:28:56.473 --rc genhtml_legend=1 00:28:56.473 --rc geninfo_all_blocks=1 00:28:56.473 --rc geninfo_unexecuted_blocks=1 00:28:56.473 00:28:56.473 ' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:56.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.473 --rc genhtml_branch_coverage=1 00:28:56.473 --rc genhtml_function_coverage=1 00:28:56.473 --rc genhtml_legend=1 00:28:56.473 --rc geninfo_all_blocks=1 00:28:56.473 --rc geninfo_unexecuted_blocks=1 00:28:56.473 00:28:56.473 ' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:56.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:56.473 --rc genhtml_branch_coverage=1 00:28:56.473 --rc genhtml_function_coverage=1 00:28:56.473 --rc genhtml_legend=1 00:28:56.473 --rc geninfo_all_blocks=1 
00:28:56.473 --rc geninfo_unexecuted_blocks=1 00:28:56.473 00:28:56.473 ' 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:56.473 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:28:56.474 
19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.474 19:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:56.474 
19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:56.474 19:35:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:56.474 19:35:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:29:01.750 19:35:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:01.750 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:01.750 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:01.750 
19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.750 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:01.751 Found net 
devices under 0000:31:00.0: cvl_0_0 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:01.751 Found net devices under 0000:31:00.1: cvl_0_1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:01.751 19:35:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:01.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:01.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:29:01.751 00:29:01.751 --- 10.0.0.2 ping statistics --- 00:29:01.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.751 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:01.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:01.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:29:01.751 00:29:01.751 --- 10.0.0.1 ping statistics --- 00:29:01.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:01.751 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.751 19:35:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3990155 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3990155 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3990155 ']' 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.751 19:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:01.751 [2024-11-26 19:35:35.518536] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:01.752 [2024-11-26 19:35:35.519656] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:29:01.752 [2024-11-26 19:35:35.519704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.752 [2024-11-26 19:35:35.612691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:02.011 [2024-11-26 19:35:35.666800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:02.011 [2024-11-26 19:35:35.666856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:02.011 [2024-11-26 19:35:35.666865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:02.011 [2024-11-26 19:35:35.666873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:02.011 [2024-11-26 19:35:35.666879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:02.011 [2024-11-26 19:35:35.669006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.011 [2024-11-26 19:35:35.669233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:02.011 [2024-11-26 19:35:35.669543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:02.011 [2024-11-26 19:35:35.669545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.011 [2024-11-26 19:35:35.751743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:02.011 [2024-11-26 19:35:35.752425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:02.011 [2024-11-26 19:35:35.752751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:02.011 [2024-11-26 19:35:35.753249] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:02.011 [2024-11-26 19:35:35.753413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:02.577 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:02.836 [2024-11-26 19:35:36.470580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:02.836 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:02.836 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:02.836 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:29:03.095 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:03.095 19:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.353 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:03.353 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.353 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:03.353 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:03.613 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.871 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:03.871 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:03.871 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:03.871 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:04.131 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:29:04.131 19:35:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:04.391 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:04.391 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:04.391 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:04.650 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:04.650 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:04.650 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:04.909 [2024-11-26 19:35:38.654403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:04.909 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:05.167 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:05.167 19:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:29:05.734 19:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:29:07.633 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:07.633 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:07.633 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:07.633 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:29:07.633 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:07.634 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:29:07.634 19:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:07.634 [global] 00:29:07.634 thread=1 00:29:07.634 invalidate=1 00:29:07.634 rw=write 00:29:07.634 time_based=1 00:29:07.634 runtime=1 00:29:07.634 ioengine=libaio 00:29:07.634 direct=1 00:29:07.634 bs=4096 00:29:07.634 iodepth=1 00:29:07.634 norandommap=0 00:29:07.634 numjobs=1 00:29:07.634 00:29:07.634 verify_dump=1 00:29:07.634 verify_backlog=512 00:29:07.634 verify_state_save=0 00:29:07.634 do_verify=1 00:29:07.634 verify=crc32c-intel 00:29:07.634 [job0] 00:29:07.634 filename=/dev/nvme0n1 00:29:07.634 [job1] 00:29:07.634 filename=/dev/nvme0n2 00:29:07.634 [job2] 00:29:07.634 filename=/dev/nvme0n3 00:29:07.634 [job3] 00:29:07.634 filename=/dev/nvme0n4 00:29:07.634 Could not set queue depth (nvme0n1) 00:29:07.634 Could not set queue depth (nvme0n2) 00:29:07.634 Could not set queue depth (nvme0n3) 00:29:07.634 Could not set queue depth (nvme0n4) 00:29:08.201 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:08.201 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:08.201 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:08.201 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:08.201 fio-3.35 00:29:08.201 Starting 4 threads 00:29:09.143 00:29:09.143 job0: (groupid=0, jobs=1): err= 0: pid=3991728: Tue Nov 26 19:35:42 2024 00:29:09.143 read: IOPS=43, BW=176KiB/s (180kB/s)(176KiB/1002msec) 00:29:09.143 slat (nsec): min=10927, max=26624, avg=22375.45, stdev=5402.45 00:29:09.143 clat (usec): min=1066, max=42065, avg=15051.75, stdev=19393.97 00:29:09.143 lat (usec): min=1083, 
max=42091, avg=15074.13, stdev=19395.39 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 1074], 5.00th=[ 1139], 10.00th=[ 1188], 20.00th=[ 1237], 00:29:09.143 | 30.00th=[ 1254], 40.00th=[ 1287], 50.00th=[ 1319], 60.00th=[ 1369], 00:29:09.143 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:09.143 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:09.143 | 99.99th=[42206] 00:29:09.143 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:29:09.143 slat (nsec): min=4026, max=44855, avg=13188.81, stdev=4099.42 00:29:09.143 clat (usec): min=172, max=1022, avg=643.41, stdev=131.80 00:29:09.143 lat (usec): min=184, max=1036, avg=656.60, stdev=133.05 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 326], 5.00th=[ 433], 10.00th=[ 469], 20.00th=[ 537], 00:29:09.143 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 676], 00:29:09.143 | 70.00th=[ 709], 80.00th=[ 758], 90.00th=[ 816], 95.00th=[ 848], 00:29:09.143 | 99.00th=[ 930], 99.50th=[ 963], 99.90th=[ 1020], 99.95th=[ 1020], 00:29:09.143 | 99.99th=[ 1020] 00:29:09.143 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:29:09.143 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:09.143 lat (usec) : 250=0.36%, 500=12.77%, 750=58.99%, 1000=19.78% 00:29:09.143 lat (msec) : 2=5.40%, 50=2.70% 00:29:09.143 cpu : usr=0.40%, sys=0.50%, ctx=559, majf=0, minf=1 00:29:09.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 issued rwts: total=44,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.143 job1: (groupid=0, jobs=1): err= 0: pid=3991734: Tue Nov 26 19:35:42 2024 00:29:09.143 read: IOPS=511, 
BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:29:09.143 slat (nsec): min=2856, max=45100, avg=21521.37, stdev=7293.67 00:29:09.143 clat (usec): min=481, max=41764, avg=1028.20, stdev=1806.82 00:29:09.143 lat (usec): min=508, max=41775, avg=1049.72, stdev=1806.43 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 709], 5.00th=[ 775], 10.00th=[ 816], 20.00th=[ 865], 00:29:09.143 | 30.00th=[ 898], 40.00th=[ 930], 50.00th=[ 955], 60.00th=[ 971], 00:29:09.143 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1123], 00:29:09.143 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[41681], 99.95th=[41681], 00:29:09.143 | 99.99th=[41681] 00:29:09.143 write: IOPS=735, BW=2941KiB/s (3012kB/s)(2944KiB/1001msec); 0 zone resets 00:29:09.143 slat (nsec): min=3523, max=69541, avg=15319.28, stdev=7962.32 00:29:09.143 clat (usec): min=180, max=1118, avg=604.21, stdev=143.59 00:29:09.143 lat (usec): min=191, max=1132, avg=619.53, stdev=144.54 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 265], 5.00th=[ 355], 10.00th=[ 416], 20.00th=[ 486], 00:29:09.143 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 611], 60.00th=[ 635], 00:29:09.143 | 70.00th=[ 676], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 840], 00:29:09.143 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1123], 99.95th=[ 1123], 00:29:09.143 | 99.99th=[ 1123] 00:29:09.143 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:29:09.143 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:09.143 lat (usec) : 250=0.56%, 500=12.98%, 750=37.74%, 1000=36.54% 00:29:09.143 lat (msec) : 2=12.10%, 50=0.08% 00:29:09.143 cpu : usr=2.40%, sys=2.80%, ctx=1249, majf=0, minf=2 00:29:09.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 issued rwts: total=512,736,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:29:09.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.143 job2: (groupid=0, jobs=1): err= 0: pid=3991751: Tue Nov 26 19:35:42 2024 00:29:09.143 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:29:09.143 slat (nsec): min=3775, max=57951, avg=20138.67, stdev=8407.49 00:29:09.143 clat (usec): min=394, max=1241, avg=844.89, stdev=140.47 00:29:09.143 lat (usec): min=406, max=1253, avg=865.02, stdev=140.48 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 545], 5.00th=[ 594], 10.00th=[ 652], 20.00th=[ 709], 00:29:09.143 | 30.00th=[ 775], 40.00th=[ 824], 50.00th=[ 865], 60.00th=[ 898], 00:29:09.143 | 70.00th=[ 930], 80.00th=[ 963], 90.00th=[ 996], 95.00th=[ 1057], 00:29:09.143 | 99.00th=[ 1156], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:29:09.143 | 99.99th=[ 1237] 00:29:09.143 write: IOPS=977, BW=3908KiB/s (4002kB/s)(3912KiB/1001msec); 0 zone resets 00:29:09.143 slat (nsec): min=4312, max=53085, avg=17538.09, stdev=9495.32 00:29:09.143 clat (usec): min=99, max=1062, avg=543.26, stdev=134.84 00:29:09.143 lat (usec): min=114, max=1097, avg=560.79, stdev=135.55 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 251], 5.00th=[ 338], 10.00th=[ 367], 20.00th=[ 424], 00:29:09.143 | 30.00th=[ 469], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 578], 00:29:09.143 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 709], 95.00th=[ 750], 00:29:09.143 | 99.00th=[ 881], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:29:09.143 | 99.99th=[ 1057] 00:29:09.143 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:29:09.143 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:09.143 lat (usec) : 100=0.07%, 250=0.40%, 500=23.49%, 750=47.32%, 1000=25.30% 00:29:09.143 lat (msec) : 2=3.42% 00:29:09.143 cpu : usr=1.30%, sys=2.70%, ctx=1493, majf=0, minf=1 00:29:09.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:29:09.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 issued rwts: total=512,978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.143 job3: (groupid=0, jobs=1): err= 0: pid=3991759: Tue Nov 26 19:35:42 2024 00:29:09.143 read: IOPS=387, BW=1552KiB/s (1589kB/s)(1564KiB/1008msec) 00:29:09.143 slat (nsec): min=3726, max=43879, avg=20074.86, stdev=7330.96 00:29:09.143 clat (usec): min=586, max=41925, avg=1629.43, stdev=5000.78 00:29:09.143 lat (usec): min=596, max=41951, avg=1649.50, stdev=5000.61 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 709], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 914], 00:29:09.143 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1012], 60.00th=[ 1045], 00:29:09.143 | 70.00th=[ 1057], 80.00th=[ 1106], 90.00th=[ 1156], 95.00th=[ 1205], 00:29:09.143 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:09.143 | 99.99th=[41681] 00:29:09.143 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:29:09.143 slat (nsec): min=4248, max=45882, avg=13044.93, stdev=3596.62 00:29:09.143 clat (usec): min=231, max=1103, avg=686.07, stdev=147.35 00:29:09.143 lat (usec): min=236, max=1117, avg=699.12, stdev=148.16 00:29:09.143 clat percentiles (usec): 00:29:09.143 | 1.00th=[ 347], 5.00th=[ 449], 10.00th=[ 494], 20.00th=[ 553], 00:29:09.143 | 30.00th=[ 603], 40.00th=[ 644], 50.00th=[ 693], 60.00th=[ 742], 00:29:09.143 | 70.00th=[ 775], 80.00th=[ 807], 90.00th=[ 873], 95.00th=[ 930], 00:29:09.143 | 99.00th=[ 1012], 99.50th=[ 1057], 99.90th=[ 1106], 99.95th=[ 1106], 00:29:09.143 | 99.99th=[ 1106] 00:29:09.143 bw ( KiB/s): min= 4096, max= 4096, per=37.70%, avg=4096.00, stdev= 0.00, samples=1 00:29:09.143 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:09.143 lat (usec) : 250=0.11%, 500=6.09%, 
750=30.12%, 1000=38.87% 00:29:09.143 lat (msec) : 2=24.14%, 50=0.66% 00:29:09.143 cpu : usr=1.09%, sys=1.09%, ctx=904, majf=0, minf=1 00:29:09.143 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:09.143 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:09.143 issued rwts: total=391,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:09.143 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:09.144 00:29:09.144 Run status group 0 (all jobs): 00:29:09.144 READ: bw=5790KiB/s (5929kB/s), 176KiB/s-2046KiB/s (180kB/s-2095kB/s), io=5836KiB (5976kB), run=1001-1008msec 00:29:09.144 WRITE: bw=10.6MiB/s (11.1MB/s), 2032KiB/s-3908KiB/s (2081kB/s-4002kB/s), io=10.7MiB (11.2MB), run=1001-1008msec 00:29:09.144 00:29:09.144 Disk stats (read/write): 00:29:09.144 nvme0n1: ios=37/512, merge=0/0, ticks=1424/324, in_queue=1748, util=96.59% 00:29:09.144 nvme0n2: ios=500/512, merge=0/0, ticks=500/239, in_queue=739, util=86.62% 00:29:09.144 nvme0n3: ios=540/661, merge=0/0, ticks=1328/355, in_queue=1683, util=96.61% 00:29:09.144 nvme0n4: ios=340/512, merge=0/0, ticks=457/346, in_queue=803, util=89.39% 00:29:09.144 19:35:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:09.429 [global] 00:29:09.429 thread=1 00:29:09.429 invalidate=1 00:29:09.429 rw=randwrite 00:29:09.429 time_based=1 00:29:09.429 runtime=1 00:29:09.429 ioengine=libaio 00:29:09.429 direct=1 00:29:09.429 bs=4096 00:29:09.429 iodepth=1 00:29:09.429 norandommap=0 00:29:09.429 numjobs=1 00:29:09.429 00:29:09.429 verify_dump=1 00:29:09.429 verify_backlog=512 00:29:09.429 verify_state_save=0 00:29:09.429 do_verify=1 00:29:09.429 verify=crc32c-intel 00:29:09.429 [job0] 00:29:09.429 filename=/dev/nvme0n1 00:29:09.429 [job1] 
00:29:09.429 filename=/dev/nvme0n2 00:29:09.429 [job2] 00:29:09.429 filename=/dev/nvme0n3 00:29:09.429 [job3] 00:29:09.429 filename=/dev/nvme0n4 00:29:09.429 Could not set queue depth (nvme0n1) 00:29:09.429 Could not set queue depth (nvme0n2) 00:29:09.429 Could not set queue depth (nvme0n3) 00:29:09.429 Could not set queue depth (nvme0n4) 00:29:09.690 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:09.690 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:09.690 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:09.690 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:09.690 fio-3.35 00:29:09.690 Starting 4 threads 00:29:11.068 00:29:11.068 job0: (groupid=0, jobs=1): err= 0: pid=3992254: Tue Nov 26 19:35:44 2024 00:29:11.068 read: IOPS=18, BW=74.5KiB/s (76.3kB/s)(76.0KiB/1020msec) 00:29:11.068 slat (nsec): min=10963, max=26387, avg=24792.58, stdev=4254.91 00:29:11.068 clat (usec): min=40912, max=41902, avg=41103.09, stdev=304.13 00:29:11.068 lat (usec): min=40938, max=41929, avg=41127.88, stdev=303.23 00:29:11.068 clat percentiles (usec): 00:29:11.068 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:11.068 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:11.068 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:29:11.068 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:29:11.068 | 99.99th=[41681] 00:29:11.068 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:29:11.068 slat (nsec): min=3223, max=69898, avg=13098.27, stdev=5628.38 00:29:11.068 clat (usec): min=153, max=772, avg=448.20, stdev=100.58 00:29:11.068 lat (usec): min=171, max=785, avg=461.30, stdev=101.86 00:29:11.068 clat 
percentiles (usec): 00:29:11.068 | 1.00th=[ 251], 5.00th=[ 293], 10.00th=[ 318], 20.00th=[ 359], 00:29:11.068 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 445], 60.00th=[ 486], 00:29:11.068 | 70.00th=[ 506], 80.00th=[ 537], 90.00th=[ 586], 95.00th=[ 611], 00:29:11.068 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[ 775], 99.95th=[ 775], 00:29:11.068 | 99.99th=[ 775] 00:29:11.068 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:29:11.068 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:11.068 lat (usec) : 250=0.94%, 500=64.60%, 750=30.51%, 1000=0.38% 00:29:11.068 lat (msec) : 50=3.58% 00:29:11.068 cpu : usr=0.39%, sys=1.18%, ctx=532, majf=0, minf=1 00:29:11.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.068 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.068 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:11.068 job1: (groupid=0, jobs=1): err= 0: pid=3992262: Tue Nov 26 19:35:44 2024 00:29:11.068 read: IOPS=17, BW=71.3KiB/s (73.0kB/s)(72.0KiB/1010msec) 00:29:11.068 slat (nsec): min=12240, max=17212, avg=14409.44, stdev=1474.93 00:29:11.068 clat (usec): min=41669, max=42189, avg=41958.95, stdev=132.23 00:29:11.068 lat (usec): min=41683, max=42203, avg=41973.36, stdev=132.47 00:29:11.068 clat percentiles (usec): 00:29:11.068 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:29:11.068 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:29:11.068 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:11.068 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:11.069 | 99.99th=[42206] 00:29:11.069 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:29:11.069 slat 
(nsec): min=3541, max=47330, avg=15504.79, stdev=5660.70 00:29:11.069 clat (usec): min=95, max=1128, avg=475.71, stdev=213.20 00:29:11.069 lat (usec): min=110, max=1147, avg=491.21, stdev=214.20 00:29:11.069 clat percentiles (usec): 00:29:11.069 | 1.00th=[ 206], 5.00th=[ 231], 10.00th=[ 251], 20.00th=[ 297], 00:29:11.069 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 396], 60.00th=[ 469], 00:29:11.069 | 70.00th=[ 578], 80.00th=[ 709], 90.00th=[ 783], 95.00th=[ 873], 00:29:11.069 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1123], 99.95th=[ 1123], 00:29:11.069 | 99.99th=[ 1123] 00:29:11.069 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:29:11.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:11.069 lat (usec) : 100=0.19%, 250=9.43%, 500=51.89%, 750=22.26%, 1000=11.32% 00:29:11.069 lat (msec) : 2=1.51%, 50=3.40% 00:29:11.069 cpu : usr=0.40%, sys=1.39%, ctx=533, majf=0, minf=1 00:29:11.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:11.069 job2: (groupid=0, jobs=1): err= 0: pid=3992274: Tue Nov 26 19:35:44 2024 00:29:11.069 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:29:11.069 slat (nsec): min=10760, max=26706, avg=15117.02, stdev=2814.25 00:29:11.069 clat (usec): min=656, max=1368, avg=953.78, stdev=77.40 00:29:11.069 lat (usec): min=672, max=1390, avg=968.90, stdev=77.61 00:29:11.069 clat percentiles (usec): 00:29:11.069 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:29:11.069 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 971], 00:29:11.069 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1074], 
00:29:11.069 | 99.00th=[ 1156], 99.50th=[ 1172], 99.90th=[ 1369], 99.95th=[ 1369], 00:29:11.069 | 99.99th=[ 1369] 00:29:11.069 write: IOPS=839, BW=3357KiB/s (3437kB/s)(3360KiB/1001msec); 0 zone resets 00:29:11.069 slat (nsec): min=3943, max=44905, avg=12760.91, stdev=3588.54 00:29:11.069 clat (usec): min=245, max=1058, avg=581.15, stdev=152.49 00:29:11.069 lat (usec): min=250, max=1071, avg=593.91, stdev=153.15 00:29:11.069 clat percentiles (usec): 00:29:11.069 | 1.00th=[ 277], 5.00th=[ 351], 10.00th=[ 408], 20.00th=[ 441], 00:29:11.069 | 30.00th=[ 490], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 603], 00:29:11.069 | 70.00th=[ 652], 80.00th=[ 717], 90.00th=[ 791], 95.00th=[ 865], 00:29:11.069 | 99.00th=[ 947], 99.50th=[ 1012], 99.90th=[ 1057], 99.95th=[ 1057], 00:29:11.069 | 99.99th=[ 1057] 00:29:11.069 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:29:11.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:11.069 lat (usec) : 250=0.22%, 500=20.12%, 750=32.77%, 1000=38.39% 00:29:11.069 lat (msec) : 2=8.51% 00:29:11.069 cpu : usr=0.70%, sys=2.00%, ctx=1353, majf=0, minf=2 00:29:11.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 issued rwts: total=512,840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:11.069 job3: (groupid=0, jobs=1): err= 0: pid=3992282: Tue Nov 26 19:35:44 2024 00:29:11.069 read: IOPS=17, BW=69.2KiB/s (70.9kB/s)(72.0KiB/1040msec) 00:29:11.069 slat (nsec): min=4792, max=25686, avg=23390.72, stdev=5783.98 00:29:11.069 clat (usec): min=1015, max=42070, avg=39660.45, stdev=9645.36 00:29:11.069 lat (usec): min=1019, max=42095, avg=39683.84, stdev=9650.04 00:29:11.069 clat percentiles (usec): 00:29:11.069 | 1.00th=[ 
1012], 5.00th=[ 1012], 10.00th=[41681], 20.00th=[41681], 00:29:11.069 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:29:11.069 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:11.069 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:11.069 | 99.99th=[42206] 00:29:11.069 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:29:11.069 slat (nsec): min=3901, max=71953, avg=12345.93, stdev=4066.47 00:29:11.069 clat (usec): min=124, max=1084, avg=619.40, stdev=159.63 00:29:11.069 lat (usec): min=129, max=1097, avg=631.75, stdev=160.78 00:29:11.069 clat percentiles (usec): 00:29:11.069 | 1.00th=[ 258], 5.00th=[ 351], 10.00th=[ 420], 20.00th=[ 490], 00:29:11.069 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 660], 00:29:11.069 | 70.00th=[ 701], 80.00th=[ 750], 90.00th=[ 832], 95.00th=[ 889], 00:29:11.069 | 99.00th=[ 979], 99.50th=[ 1029], 99.90th=[ 1090], 99.95th=[ 1090], 00:29:11.069 | 99.99th=[ 1090] 00:29:11.069 bw ( KiB/s): min= 4096, max= 4096, per=44.82%, avg=4096.00, stdev= 0.00, samples=1 00:29:11.069 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:11.069 lat (usec) : 250=0.94%, 500=20.57%, 750=55.85%, 1000=18.30% 00:29:11.069 lat (msec) : 2=1.13%, 50=3.21% 00:29:11.069 cpu : usr=0.10%, sys=0.67%, ctx=531, majf=0, minf=1 00:29:11.069 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:11.069 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:11.069 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:11.069 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:11.069 00:29:11.069 Run status group 0 (all jobs): 00:29:11.069 READ: bw=2181KiB/s (2233kB/s), 69.2KiB/s-2046KiB/s (70.9kB/s-2095kB/s), io=2268KiB (2322kB), run=1001-1040msec 00:29:11.069 WRITE: bw=9138KiB/s 
(9358kB/s), 1969KiB/s-3357KiB/s (2016kB/s-3437kB/s), io=9504KiB (9732kB), run=1001-1040msec 00:29:11.069 00:29:11.069 Disk stats (read/write): 00:29:11.069 nvme0n1: ios=64/512, merge=0/0, ticks=643/168, in_queue=811, util=87.68% 00:29:11.069 nvme0n2: ios=60/512, merge=0/0, ticks=782/190, in_queue=972, util=97.76% 00:29:11.069 nvme0n3: ios=512/517, merge=0/0, ticks=466/310, in_queue=776, util=88.36% 00:29:11.069 nvme0n4: ios=13/512, merge=0/0, ticks=504/312, in_queue=816, util=89.40% 00:29:11.069 19:35:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:11.069 [global] 00:29:11.069 thread=1 00:29:11.069 invalidate=1 00:29:11.069 rw=write 00:29:11.069 time_based=1 00:29:11.069 runtime=1 00:29:11.069 ioengine=libaio 00:29:11.069 direct=1 00:29:11.069 bs=4096 00:29:11.069 iodepth=128 00:29:11.069 norandommap=0 00:29:11.069 numjobs=1 00:29:11.069 00:29:11.069 verify_dump=1 00:29:11.069 verify_backlog=512 00:29:11.069 verify_state_save=0 00:29:11.069 do_verify=1 00:29:11.069 verify=crc32c-intel 00:29:11.069 [job0] 00:29:11.069 filename=/dev/nvme0n1 00:29:11.069 [job1] 00:29:11.069 filename=/dev/nvme0n2 00:29:11.069 [job2] 00:29:11.069 filename=/dev/nvme0n3 00:29:11.069 [job3] 00:29:11.069 filename=/dev/nvme0n4 00:29:11.069 Could not set queue depth (nvme0n1) 00:29:11.069 Could not set queue depth (nvme0n2) 00:29:11.069 Could not set queue depth (nvme0n3) 00:29:11.069 Could not set queue depth (nvme0n4) 00:29:11.329 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:11.329 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:11.329 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:11.329 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:11.329 fio-3.35 00:29:11.329 Starting 4 threads 00:29:12.709 00:29:12.709 job0: (groupid=0, jobs=1): err= 0: pid=3992780: Tue Nov 26 19:35:46 2024 00:29:12.709 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:29:12.709 slat (nsec): min=1014, max=12661k, avg=151763.41, stdev=947423.14 00:29:12.709 clat (usec): min=4127, max=74449, avg=16575.24, stdev=10225.78 00:29:12.709 lat (usec): min=4129, max=74457, avg=16727.00, stdev=10327.72 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 5211], 5.00th=[ 7111], 10.00th=[ 8225], 20.00th=[11076], 00:29:12.709 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13304], 60.00th=[14877], 00:29:12.709 | 70.00th=[16581], 80.00th=[20317], 90.00th=[25560], 95.00th=[42206], 00:29:12.709 | 99.00th=[61604], 99.50th=[67634], 99.90th=[73925], 99.95th=[73925], 00:29:12.709 | 99.99th=[74974] 00:29:12.709 write: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.9MiB/1015msec); 0 zone resets 00:29:12.709 slat (nsec): min=1701, max=19919k, avg=108023.96, stdev=666724.80 00:29:12.709 clat (usec): min=2594, max=74413, avg=16991.13, stdev=8425.02 00:29:12.709 lat (usec): min=2597, max=74415, avg=17099.15, stdev=8452.93 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 4178], 5.00th=[ 7570], 10.00th=[ 8717], 20.00th=[ 9765], 00:29:12.709 | 30.00th=[12518], 40.00th=[16057], 50.00th=[16909], 60.00th=[17695], 00:29:12.709 | 70.00th=[17695], 80.00th=[19006], 90.00th=[26608], 95.00th=[36439], 00:29:12.709 | 99.00th=[44303], 99.50th=[54264], 99.90th=[65799], 99.95th=[65799], 00:29:12.709 | 99.99th=[73925] 00:29:12.709 bw ( KiB/s): min=15224, max=16384, per=18.61%, avg=15804.00, stdev=820.24, samples=2 00:29:12.709 iops : min= 3806, max= 4096, avg=3951.00, stdev=205.06, samples=2 00:29:12.709 lat (msec) : 4=0.31%, 10=18.82%, 20=61.08%, 50=18.27%, 100=1.51% 00:29:12.709 cpu : usr=1.78%, sys=3.65%, ctx=402, majf=0, minf=1 00:29:12.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:29:12.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.709 issued rwts: total=3584,4078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.709 job1: (groupid=0, jobs=1): err= 0: pid=3992791: Tue Nov 26 19:35:46 2024 00:29:12.709 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:29:12.709 slat (nsec): min=927, max=18092k, avg=102145.98, stdev=771372.26 00:29:12.709 clat (usec): min=2998, max=37824, avg=12367.69, stdev=7235.81 00:29:12.709 lat (usec): min=3002, max=37829, avg=12469.83, stdev=7279.49 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 3589], 5.00th=[ 6194], 10.00th=[ 6783], 20.00th=[ 6980], 00:29:12.709 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 9372], 60.00th=[11994], 00:29:12.709 | 70.00th=[13304], 80.00th=[16450], 90.00th=[22152], 95.00th=[29230], 00:29:12.709 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:29:12.709 | 99.99th=[38011] 00:29:12.709 write: IOPS=4001, BW=15.6MiB/s (16.4MB/s)(15.9MiB/1015msec); 0 zone resets 00:29:12.709 slat (nsec): min=1615, max=15684k, avg=152195.41, stdev=846789.07 00:29:12.709 clat (usec): min=1075, max=101418, avg=20783.10, stdev=17656.26 00:29:12.709 lat (usec): min=1085, max=101426, avg=20935.30, stdev=17750.46 00:29:12.709 clat percentiles (msec): 00:29:12.709 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 10], 00:29:12.709 | 30.00th=[ 13], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:29:12.709 | 70.00th=[ 18], 80.00th=[ 23], 90.00th=[ 51], 95.00th=[ 61], 00:29:12.709 | 99.00th=[ 95], 99.50th=[ 97], 99.90th=[ 100], 99.95th=[ 102], 00:29:12.709 | 99.99th=[ 102] 00:29:12.709 bw ( KiB/s): min=12784, max=18696, per=18.53%, avg=15740.00, stdev=4180.42, samples=2 00:29:12.709 iops : min= 3196, max= 4674, avg=3935.00, stdev=1045.10, samples=2 
00:29:12.709 lat (msec) : 2=0.05%, 4=1.26%, 10=35.67%, 20=45.55%, 50=12.10% 00:29:12.709 lat (msec) : 100=5.34%, 250=0.04% 00:29:12.709 cpu : usr=2.76%, sys=2.47%, ctx=397, majf=0, minf=2 00:29:12.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:12.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.709 issued rwts: total=3584,4062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.709 job2: (groupid=0, jobs=1): err= 0: pid=3992815: Tue Nov 26 19:35:46 2024 00:29:12.709 read: IOPS=9689, BW=37.8MiB/s (39.7MB/s)(38.0MiB/1004msec) 00:29:12.709 slat (nsec): min=975, max=6492.8k, avg=54536.97, stdev=435796.55 00:29:12.709 clat (usec): min=2408, max=15365, avg=7022.13, stdev=1719.13 00:29:12.709 lat (usec): min=2411, max=15368, avg=7076.66, stdev=1748.81 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 3752], 5.00th=[ 5145], 10.00th=[ 5538], 20.00th=[ 5735], 00:29:12.709 | 30.00th=[ 5997], 40.00th=[ 6194], 50.00th=[ 6390], 60.00th=[ 6849], 00:29:12.709 | 70.00th=[ 7635], 80.00th=[ 8356], 90.00th=[ 9765], 95.00th=[10552], 00:29:12.709 | 99.00th=[11469], 99.50th=[11731], 99.90th=[15401], 99.95th=[15401], 00:29:12.709 | 99.99th=[15401] 00:29:12.709 write: IOPS=9789, BW=38.2MiB/s (40.1MB/s)(38.4MiB/1004msec); 0 zone resets 00:29:12.709 slat (nsec): min=1699, max=5539.9k, avg=45228.54, stdev=328689.11 00:29:12.709 clat (usec): min=1266, max=12364, avg=6005.44, stdev=1424.87 00:29:12.709 lat (usec): min=1450, max=12368, avg=6050.67, stdev=1432.29 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 2606], 5.00th=[ 3884], 10.00th=[ 4228], 20.00th=[ 4817], 00:29:12.709 | 30.00th=[ 5276], 40.00th=[ 5800], 50.00th=[ 6194], 60.00th=[ 6456], 00:29:12.709 | 70.00th=[ 6587], 80.00th=[ 6652], 90.00th=[ 8094], 95.00th=[ 8979], 00:29:12.709 | 
99.00th=[ 9765], 99.50th=[10028], 99.90th=[11731], 99.95th=[12125], 00:29:12.709 | 99.99th=[12387] 00:29:12.709 bw ( KiB/s): min=36928, max=40944, per=45.84%, avg=38936.00, stdev=2839.74, samples=2 00:29:12.709 iops : min= 9232, max=10236, avg=9734.00, stdev=709.94, samples=2 00:29:12.709 lat (msec) : 2=0.22%, 4=3.30%, 10=91.91%, 20=4.58% 00:29:12.709 cpu : usr=4.99%, sys=4.39%, ctx=708, majf=0, minf=2 00:29:12.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:29:12.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.709 issued rwts: total=9728,9829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.709 job3: (groupid=0, jobs=1): err= 0: pid=3992824: Tue Nov 26 19:35:46 2024 00:29:12.709 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1009msec) 00:29:12.709 slat (nsec): min=945, max=13668k, avg=151738.93, stdev=946136.87 00:29:12.709 clat (usec): min=6706, max=71584, avg=16521.29, stdev=9272.80 00:29:12.709 lat (usec): min=6711, max=71592, avg=16673.03, stdev=9354.56 00:29:12.709 clat percentiles (usec): 00:29:12.709 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[10945], 20.00th=[11863], 00:29:12.709 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13173], 60.00th=[14353], 00:29:12.709 | 70.00th=[15795], 80.00th=[17171], 90.00th=[23200], 95.00th=[37487], 00:29:12.709 | 99.00th=[61080], 99.50th=[65799], 99.90th=[71828], 99.95th=[71828], 00:29:12.709 | 99.99th=[71828] 00:29:12.709 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:29:12.709 slat (nsec): min=1653, max=18512k, avg=140498.35, stdev=789703.27 00:29:12.709 clat (msec): min=4, max=101, avg=21.08, stdev=14.57 00:29:12.709 lat (msec): min=4, max=103, avg=21.22, stdev=14.64 00:29:12.709 clat percentiles (msec): 00:29:12.709 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 
14], 00:29:12.709 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 18], 00:29:12.709 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 36], 95.00th=[ 47], 00:29:12.709 | 99.00th=[ 96], 99.50th=[ 99], 99.90th=[ 103], 99.95th=[ 103], 00:29:12.709 | 99.99th=[ 103] 00:29:12.709 bw ( KiB/s): min=12080, max=16384, per=16.76%, avg=14232.00, stdev=3043.39, samples=2 00:29:12.709 iops : min= 3020, max= 4096, avg=3558.00, stdev=760.85, samples=2 00:29:12.709 lat (msec) : 10=4.59%, 20=75.76%, 50=16.25%, 100=3.18%, 250=0.22% 00:29:12.709 cpu : usr=1.88%, sys=3.17%, ctx=398, majf=0, minf=2 00:29:12.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:29:12.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:12.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:12.709 issued rwts: total=3173,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:12.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:12.709 00:29:12.709 Run status group 0 (all jobs): 00:29:12.709 READ: bw=77.2MiB/s (81.0MB/s), 12.3MiB/s-37.8MiB/s (12.9MB/s-39.7MB/s), io=78.4MiB (82.2MB), run=1004-1015msec 00:29:12.709 WRITE: bw=82.9MiB/s (87.0MB/s), 13.9MiB/s-38.2MiB/s (14.5MB/s-40.1MB/s), io=84.2MiB (88.3MB), run=1004-1015msec 00:29:12.709 00:29:12.709 Disk stats (read/write): 00:29:12.709 nvme0n1: ios=2961/3072, merge=0/0, ticks=50656/54160, in_queue=104816, util=97.90% 00:29:12.709 nvme0n2: ios=2944/3072, merge=0/0, ticks=34698/68564, in_queue=103262, util=96.13% 00:29:12.709 nvme0n3: ios=8079/8192, merge=0/0, ticks=55378/47467, in_queue=102845, util=97.05% 00:29:12.709 nvme0n4: ios=3000/3072, merge=0/0, ticks=48587/54707, in_queue=103294, util=89.43% 00:29:12.709 19:35:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:12.710 [global] 00:29:12.710 thread=1 
00:29:12.710 invalidate=1 00:29:12.710 rw=randwrite 00:29:12.710 time_based=1 00:29:12.710 runtime=1 00:29:12.710 ioengine=libaio 00:29:12.710 direct=1 00:29:12.710 bs=4096 00:29:12.710 iodepth=128 00:29:12.710 norandommap=0 00:29:12.710 numjobs=1 00:29:12.710 00:29:12.710 verify_dump=1 00:29:12.710 verify_backlog=512 00:29:12.710 verify_state_save=0 00:29:12.710 do_verify=1 00:29:12.710 verify=crc32c-intel 00:29:12.710 [job0] 00:29:12.710 filename=/dev/nvme0n1 00:29:12.710 [job1] 00:29:12.710 filename=/dev/nvme0n2 00:29:12.710 [job2] 00:29:12.710 filename=/dev/nvme0n3 00:29:12.710 [job3] 00:29:12.710 filename=/dev/nvme0n4 00:29:12.710 Could not set queue depth (nvme0n1) 00:29:12.710 Could not set queue depth (nvme0n2) 00:29:12.710 Could not set queue depth (nvme0n3) 00:29:12.710 Could not set queue depth (nvme0n4) 00:29:12.710 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:12.710 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:12.710 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:12.710 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:12.710 fio-3.35 00:29:12.710 Starting 4 threads 00:29:14.090 00:29:14.090 job0: (groupid=0, jobs=1): err= 0: pid=3993293: Tue Nov 26 19:35:47 2024 00:29:14.090 read: IOPS=6478, BW=25.3MiB/s (26.5MB/s)(26.5MiB/1047msec) 00:29:14.090 slat (nsec): min=943, max=9892.5k, avg=75713.33, stdev=552246.59 00:29:14.090 clat (usec): min=1982, max=60164, avg=9927.18, stdev=8201.36 00:29:14.090 lat (usec): min=1988, max=60173, avg=10002.90, stdev=8243.90 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 3130], 5.00th=[ 4752], 10.00th=[ 5014], 20.00th=[ 5866], 00:29:14.090 | 30.00th=[ 6521], 40.00th=[ 7111], 50.00th=[ 7635], 60.00th=[ 8455], 00:29:14.090 | 70.00th=[ 
9765], 80.00th=[11076], 90.00th=[14222], 95.00th=[23462], 00:29:14.090 | 99.00th=[49546], 99.50th=[51643], 99.90th=[57410], 99.95th=[60031], 00:29:14.090 | 99.99th=[60031] 00:29:14.090 write: IOPS=6846, BW=26.7MiB/s (28.0MB/s)(28.0MiB/1047msec); 0 zone resets 00:29:14.090 slat (nsec): min=1619, max=23525k, avg=64797.74, stdev=569083.48 00:29:14.090 clat (usec): min=1223, max=67560, avg=8728.77, stdev=8726.81 00:29:14.090 lat (usec): min=1226, max=67566, avg=8793.57, stdev=8775.17 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 2671], 5.00th=[ 3851], 10.00th=[ 4555], 20.00th=[ 5014], 00:29:14.090 | 30.00th=[ 5735], 40.00th=[ 6128], 50.00th=[ 6652], 60.00th=[ 7046], 00:29:14.090 | 70.00th=[ 7439], 80.00th=[ 8979], 90.00th=[13173], 95.00th=[24249], 00:29:14.090 | 99.00th=[54789], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:29:14.090 | 99.99th=[67634] 00:29:14.090 bw ( KiB/s): min=27824, max=29512, per=31.86%, avg=28668.00, stdev=1193.60, samples=2 00:29:14.090 iops : min= 6956, max= 7378, avg=7167.00, stdev=298.40, samples=2 00:29:14.090 lat (msec) : 2=0.19%, 4=4.67%, 10=74.77%, 20=14.39%, 50=4.83% 00:29:14.090 lat (msec) : 100=1.15% 00:29:14.090 cpu : usr=3.06%, sys=3.54%, ctx=554, majf=0, minf=1 00:29:14.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:29:14.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.090 issued rwts: total=6783,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.090 job1: (groupid=0, jobs=1): err= 0: pid=3993300: Tue Nov 26 19:35:47 2024 00:29:14.090 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1005msec) 00:29:14.090 slat (nsec): min=895, max=20303k, avg=61641.14, stdev=611647.30 00:29:14.090 clat (usec): min=697, max=62915, avg=8573.83, stdev=4972.40 00:29:14.090 lat (usec): min=702, max=62918, 
avg=8635.47, stdev=5038.11 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 3523], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 6259], 00:29:14.090 | 30.00th=[ 6587], 40.00th=[ 6980], 50.00th=[ 7242], 60.00th=[ 7570], 00:29:14.090 | 70.00th=[ 8356], 80.00th=[10290], 90.00th=[12256], 95.00th=[15139], 00:29:14.090 | 99.00th=[32113], 99.50th=[39060], 99.90th=[46924], 99.95th=[62653], 00:29:14.090 | 99.99th=[63177] 00:29:14.090 write: IOPS=7132, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1005msec); 0 zone resets 00:29:14.090 slat (nsec): min=1561, max=11943k, avg=60361.19, stdev=482208.06 00:29:14.090 clat (usec): min=1107, max=65842, avg=9239.68, stdev=8871.49 00:29:14.090 lat (usec): min=1118, max=65850, avg=9300.05, stdev=8901.40 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 2278], 5.00th=[ 3654], 10.00th=[ 4080], 20.00th=[ 5014], 00:29:14.090 | 30.00th=[ 5669], 40.00th=[ 6390], 50.00th=[ 7046], 60.00th=[ 7308], 00:29:14.090 | 70.00th=[ 7767], 80.00th=[ 9503], 90.00th=[14746], 95.00th=[26870], 00:29:14.090 | 99.00th=[49546], 99.50th=[64226], 99.90th=[65799], 99.95th=[65799], 00:29:14.090 | 99.99th=[65799] 00:29:14.090 bw ( KiB/s): min=25128, max=32216, per=31.87%, avg=28672.00, stdev=5011.97, samples=2 00:29:14.090 iops : min= 6282, max= 8054, avg=7168.00, stdev=1252.99, samples=2 00:29:14.090 lat (usec) : 750=0.02%, 1000=0.02% 00:29:14.090 lat (msec) : 2=0.35%, 4=5.75%, 10=74.10%, 20=14.49%, 50=4.83% 00:29:14.090 lat (msec) : 100=0.44% 00:29:14.090 cpu : usr=4.08%, sys=4.98%, ctx=499, majf=0, minf=2 00:29:14.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:29:14.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.090 issued rwts: total=7161,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.090 job2: (groupid=0, jobs=1): err= 0: 
pid=3993310: Tue Nov 26 19:35:47 2024 00:29:14.090 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:29:14.090 slat (nsec): min=1001, max=13027k, avg=93052.56, stdev=679921.10 00:29:14.090 clat (usec): min=2217, max=35764, avg=10867.61, stdev=5038.40 00:29:14.090 lat (usec): min=2237, max=35768, avg=10960.67, stdev=5088.67 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 5145], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7504], 00:29:14.090 | 30.00th=[ 7701], 40.00th=[ 8160], 50.00th=[ 9110], 60.00th=[ 9634], 00:29:14.090 | 70.00th=[11338], 80.00th=[14615], 90.00th=[18220], 95.00th=[20841], 00:29:14.090 | 99.00th=[29230], 99.50th=[30016], 99.90th=[34866], 99.95th=[35914], 00:29:14.090 | 99.99th=[35914] 00:29:14.090 write: IOPS=5581, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1009msec); 0 zone resets 00:29:14.090 slat (nsec): min=1650, max=12906k, avg=88983.97, stdev=547858.26 00:29:14.090 clat (usec): min=2149, max=35752, avg=12818.34, stdev=6234.99 00:29:14.090 lat (usec): min=2157, max=35754, avg=12907.33, stdev=6280.01 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 3818], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6915], 00:29:14.090 | 30.00th=[ 7832], 40.00th=[ 9765], 50.00th=[11731], 60.00th=[13304], 00:29:14.090 | 70.00th=[16188], 80.00th=[19006], 90.00th=[21890], 95.00th=[23725], 00:29:14.090 | 99.00th=[28181], 99.50th=[29754], 99.90th=[30802], 99.95th=[30802], 00:29:14.090 | 99.99th=[35914] 00:29:14.090 bw ( KiB/s): min=20480, max=23592, per=24.49%, avg=22036.00, stdev=2200.52, samples=2 00:29:14.090 iops : min= 5120, max= 5898, avg=5509.00, stdev=550.13, samples=2 00:29:14.090 lat (msec) : 4=0.96%, 10=51.34%, 20=35.43%, 50=12.27% 00:29:14.090 cpu : usr=3.77%, sys=4.27%, ctx=467, majf=0, minf=2 00:29:14.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:29:14.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:29:14.090 issued rwts: total=5125,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.090 job3: (groupid=0, jobs=1): err= 0: pid=3993316: Tue Nov 26 19:35:47 2024 00:29:14.090 read: IOPS=3390, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1006msec) 00:29:14.090 slat (nsec): min=935, max=15169k, avg=104793.58, stdev=766311.00 00:29:14.090 clat (usec): min=1975, max=50116, avg=13881.91, stdev=5869.56 00:29:14.090 lat (usec): min=5390, max=58241, avg=13986.70, stdev=5937.42 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 6194], 5.00th=[ 7242], 10.00th=[ 8586], 20.00th=[ 9372], 00:29:14.090 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[13042], 60.00th=[14353], 00:29:14.090 | 70.00th=[15795], 80.00th=[16712], 90.00th=[20317], 95.00th=[25822], 00:29:14.090 | 99.00th=[31851], 99.50th=[38536], 99.90th=[50070], 99.95th=[50070], 00:29:14.090 | 99.99th=[50070] 00:29:14.090 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:29:14.090 slat (nsec): min=1525, max=17435k, avg=174111.36, stdev=918222.30 00:29:14.090 clat (usec): min=1201, max=86254, avg=22399.70, stdev=18458.95 00:29:14.090 lat (usec): min=1212, max=86262, avg=22573.81, stdev=18573.17 00:29:14.090 clat percentiles (usec): 00:29:14.090 | 1.00th=[ 2966], 5.00th=[ 6390], 10.00th=[ 8029], 20.00th=[11076], 00:29:14.090 | 30.00th=[11469], 40.00th=[13042], 50.00th=[13960], 60.00th=[16450], 00:29:14.090 | 70.00th=[20841], 80.00th=[31851], 90.00th=[53216], 95.00th=[65274], 00:29:14.090 | 99.00th=[81265], 99.50th=[84411], 99.90th=[86508], 99.95th=[86508], 00:29:14.090 | 99.99th=[86508] 00:29:14.090 bw ( KiB/s): min=12288, max=16384, per=15.93%, avg=14336.00, stdev=2896.31, samples=2 00:29:14.090 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:29:14.090 lat (msec) : 2=0.11%, 4=0.91%, 10=21.29%, 20=55.54%, 50=15.78% 00:29:14.090 lat (msec) : 100=6.36% 00:29:14.090 cpu : usr=1.99%, sys=3.18%, 
ctx=372, majf=0, minf=1 00:29:14.090 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:29:14.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:14.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:14.090 issued rwts: total=3411,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:14.090 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:14.090 00:29:14.090 Run status group 0 (all jobs): 00:29:14.091 READ: bw=83.9MiB/s (87.9MB/s), 13.2MiB/s-27.8MiB/s (13.9MB/s-29.2MB/s), io=87.8MiB (92.1MB), run=1005-1047msec 00:29:14.091 WRITE: bw=87.9MiB/s (92.1MB/s), 13.9MiB/s-27.9MiB/s (14.6MB/s-29.2MB/s), io=92.0MiB (96.5MB), run=1005-1047msec 00:29:14.091 00:29:14.091 Disk stats (read/write): 00:29:14.091 nvme0n1: ios=5654/6059, merge=0/0, ticks=49882/48345, in_queue=98227, util=97.70% 00:29:14.091 nvme0n2: ios=5674/6143, merge=0/0, ticks=43788/48703, in_queue=92491, util=88.43% 00:29:14.091 nvme0n3: ios=4204/4608, merge=0/0, ticks=46867/58713, in_queue=105580, util=98.34% 00:29:14.091 nvme0n4: ios=3099/3167, merge=0/0, ticks=28245/39673, in_queue=67918, util=96.22% 00:29:14.091 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:14.091 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3993614 00:29:14.091 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:14.091 19:35:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:14.091 [global] 00:29:14.091 thread=1 00:29:14.091 invalidate=1 00:29:14.091 rw=read 00:29:14.091 time_based=1 00:29:14.091 runtime=10 00:29:14.091 ioengine=libaio 00:29:14.091 direct=1 00:29:14.091 bs=4096 00:29:14.091 iodepth=1 00:29:14.091 norandommap=1 00:29:14.091 
numjobs=1 00:29:14.091 00:29:14.091 [job0] 00:29:14.091 filename=/dev/nvme0n1 00:29:14.091 [job1] 00:29:14.091 filename=/dev/nvme0n2 00:29:14.091 [job2] 00:29:14.091 filename=/dev/nvme0n3 00:29:14.091 [job3] 00:29:14.091 filename=/dev/nvme0n4 00:29:14.091 Could not set queue depth (nvme0n1) 00:29:14.091 Could not set queue depth (nvme0n2) 00:29:14.091 Could not set queue depth (nvme0n3) 00:29:14.091 Could not set queue depth (nvme0n4) 00:29:14.350 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:14.350 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:14.350 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:14.350 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:14.350 fio-3.35 00:29:14.350 Starting 4 threads 00:29:17.637 19:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:17.637 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3444736, buflen=4096 00:29:17.637 fio: pid=3993829, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:17.637 19:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:17.637 fio: io_u error on file /dev/nvme0n3: Operation not 
supported: read offset=319488, buflen=4096 00:29:17.637 fio: pid=3993825, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:17.637 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11464704, buflen=4096 00:29:17.637 fio: pid=3993817, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.637 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:17.637 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13090816, buflen=4096 00:29:17.637 fio: pid=3993821, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:29:17.637 00:29:17.637 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3993817: Tue Nov 26 19:35:51 2024 00:29:17.637 read: IOPS=934, BW=3737KiB/s (3827kB/s)(10.9MiB/2996msec) 00:29:17.637 slat (usec): min=3, max=10036, avg=22.39, stdev=215.20 00:29:17.637 clat (usec): min=464, max=1519, avg=1036.40, stdev=122.59 00:29:17.637 lat (usec): min=478, max=11147, avg=1058.79, stdev=248.80 00:29:17.637 clat percentiles (usec): 00:29:17.637 | 1.00th=[ 750], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:29:17.637 | 30.00th=[ 963], 40.00th=[ 996], 50.00th=[ 1029], 60.00th=[ 1074], 00:29:17.637 | 70.00th=[ 1106], 
80.00th=[ 1139], 90.00th=[ 1205], 95.00th=[ 1237], 00:29:17.637 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1401], 99.95th=[ 1418], 00:29:17.637 | 99.99th=[ 1516] 00:29:17.637 bw ( KiB/s): min= 3552, max= 3840, per=42.40%, avg=3710.40, stdev=118.85, samples=5 00:29:17.637 iops : min= 888, max= 960, avg=927.60, stdev=29.71, samples=5 00:29:17.637 lat (usec) : 500=0.04%, 750=0.93%, 1000=40.93% 00:29:17.637 lat (msec) : 2=58.07% 00:29:17.637 cpu : usr=0.83%, sys=2.27%, ctx=2803, majf=0, minf=2 00:29:17.637 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.637 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.637 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:17.638 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3993821: Tue Nov 26 19:35:51 2024 00:29:17.638 read: IOPS=1011, BW=4044KiB/s (4141kB/s)(12.5MiB/3161msec) 00:29:17.638 slat (usec): min=2, max=11009, avg=22.17, stdev=268.64 00:29:17.638 clat (usec): min=430, max=1480, avg=956.13, stdev=95.66 00:29:17.638 lat (usec): min=434, max=11964, avg=978.31, stdev=285.37 00:29:17.638 clat percentiles (usec): 00:29:17.638 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 889], 00:29:17.638 | 30.00th=[ 922], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 988], 00:29:17.638 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1090], 00:29:17.638 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1287], 99.95th=[ 1434], 00:29:17.638 | 99.99th=[ 1483] 00:29:17.638 bw ( KiB/s): min= 3976, max= 4232, per=46.54%, avg=4072.83, stdev=87.53, samples=6 00:29:17.638 iops : min= 994, max= 1058, avg=1018.17, stdev=21.89, samples=6 00:29:17.638 lat (usec) : 500=0.09%, 750=2.82%, 1000=65.12% 00:29:17.638 lat (msec) : 2=31.94% 
00:29:17.638 cpu : usr=0.63%, sys=1.96%, ctx=3199, majf=0, minf=1 00:29:17.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 issued rwts: total=3197,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:17.638 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3993825: Tue Nov 26 19:35:51 2024 00:29:17.638 read: IOPS=27, BW=110KiB/s (113kB/s)(312KiB/2837msec) 00:29:17.638 slat (nsec): min=10977, max=91397, avg=25982.43, stdev=8825.34 00:29:17.638 clat (usec): min=973, max=42187, avg=36069.44, stdev=14254.75 00:29:17.638 lat (usec): min=1000, max=42214, avg=36095.40, stdev=14255.92 00:29:17.638 clat percentiles (usec): 00:29:17.638 | 1.00th=[ 971], 5.00th=[ 1045], 10.00th=[ 1090], 20.00th=[41157], 00:29:17.638 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:29:17.638 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:17.638 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:17.638 | 99.99th=[42206] 00:29:17.638 bw ( KiB/s): min= 96, max= 128, per=1.26%, avg=110.40, stdev=14.31, samples=5 00:29:17.638 iops : min= 24, max= 32, avg=27.60, stdev= 3.58, samples=5 00:29:17.638 lat (usec) : 1000=1.27% 00:29:17.638 lat (msec) : 2=12.66%, 50=84.81% 00:29:17.638 cpu : usr=0.14%, sys=0.00%, ctx=80, majf=0, minf=2 00:29:17.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.638 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:29:17.638 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3993829: Tue Nov 26 19:35:51 2024 00:29:17.638 read: IOPS=315, BW=1259KiB/s (1289kB/s)(3364KiB/2672msec) 00:29:17.638 slat (nsec): min=3392, max=45801, avg=18848.81, stdev=4370.73 00:29:17.638 clat (usec): min=612, max=42144, avg=3128.33, stdev=8936.03 00:29:17.638 lat (usec): min=626, max=42170, avg=3147.18, stdev=8937.37 00:29:17.638 clat percentiles (usec): 00:29:17.638 | 1.00th=[ 725], 5.00th=[ 848], 10.00th=[ 930], 20.00th=[ 988], 00:29:17.638 | 30.00th=[ 1020], 40.00th=[ 1045], 50.00th=[ 1074], 60.00th=[ 1090], 00:29:17.638 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[40633], 00:29:17.638 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:17.638 | 99.99th=[42206] 00:29:17.638 bw ( KiB/s): min= 96, max= 3720, per=12.74%, avg=1115.20, stdev=1587.68, samples=5 00:29:17.638 iops : min= 24, max= 930, avg=278.80, stdev=396.92, samples=5 00:29:17.638 lat (usec) : 750=1.43%, 1000=23.04% 00:29:17.638 lat (msec) : 2=70.31%, 50=5.11% 00:29:17.638 cpu : usr=0.37%, sys=1.01%, ctx=843, majf=0, minf=2 00:29:17.638 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:17.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:17.638 issued rwts: total=842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:17.638 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:17.638 00:29:17.638 Run status group 0 (all jobs): 00:29:17.638 READ: bw=8749KiB/s (8959kB/s), 110KiB/s-4044KiB/s (113kB/s-4141kB/s), io=27.0MiB (28.3MB), run=2672-3161msec 00:29:17.638 00:29:17.638 Disk stats (read/write): 00:29:17.638 nvme0n1: ios=2706/0, merge=0/0, ticks=2618/0, in_queue=2618, util=95.33% 00:29:17.638 nvme0n2: ios=3158/0, merge=0/0, ticks=2926/0, in_queue=2926, util=95.45% 
00:29:17.638 nvme0n3: ios=71/0, merge=0/0, ticks=2564/0, in_queue=2564, util=96.16% 00:29:17.638 nvme0n4: ios=799/0, merge=0/0, ticks=2516/0, in_queue=2516, util=96.45% 00:29:17.896 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.896 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:17.896 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:17.896 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:18.155 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:18.155 19:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3993614 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # 
fio_status=4 00:29:18.414 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:18.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:18.672 nvmf hotplug test: fio failed as expected 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.672 rmmod nvme_tcp 00:29:18.672 rmmod nvme_fabrics 00:29:18.672 rmmod nvme_keyring 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3990155 ']' 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3990155 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3990155 
']' 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3990155 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.672 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3990155 00:29:18.931 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3990155' 00:29:18.932 killing process with pid 3990155 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3990155 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3990155 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.932 19:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.469 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.469 00:29:21.469 real 0m24.687s 00:29:21.469 user 2m5.654s 00:29:21.469 sys 0m9.743s 00:29:21.469 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:21.469 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.469 ************************************ 00:29:21.469 END TEST nvmf_fio_target 00:29:21.469 ************************************ 00:29:21.469 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:21.469 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:21.470 19:35:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:21.470 ************************************ 00:29:21.470 START TEST nvmf_bdevio 00:29:21.470 ************************************ 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:21.470 * Looking for test storage... 00:29:21.470 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:29:21.470 19:35:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 
00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:21.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.470 --rc genhtml_branch_coverage=1 00:29:21.470 --rc genhtml_function_coverage=1 00:29:21.470 --rc genhtml_legend=1 00:29:21.470 --rc geninfo_all_blocks=1 00:29:21.470 --rc geninfo_unexecuted_blocks=1 00:29:21.470 00:29:21.470 ' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:21.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.470 --rc genhtml_branch_coverage=1 00:29:21.470 --rc genhtml_function_coverage=1 00:29:21.470 --rc genhtml_legend=1 00:29:21.470 --rc geninfo_all_blocks=1 00:29:21.470 --rc geninfo_unexecuted_blocks=1 00:29:21.470 00:29:21.470 ' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:21.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.470 --rc genhtml_branch_coverage=1 00:29:21.470 --rc genhtml_function_coverage=1 00:29:21.470 --rc genhtml_legend=1 00:29:21.470 --rc geninfo_all_blocks=1 00:29:21.470 --rc geninfo_unexecuted_blocks=1 00:29:21.470 00:29:21.470 ' 
00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:21.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:21.470 --rc genhtml_branch_coverage=1 00:29:21.470 --rc genhtml_function_coverage=1 00:29:21.470 --rc genhtml_legend=1 00:29:21.470 --rc geninfo_all_blocks=1 00:29:21.470 --rc geninfo_unexecuted_blocks=1 00:29:21.470 00:29:21.470 ' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:21.470 19:35:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.470 19:35:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:21.470 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:29:21.471 19:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.754 19:35:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.754 19:35:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:26.754 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:26.754 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.754 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:26.755 Found net devices under 0000:31:00.0: cvl_0_0 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:26.755 Found net devices under 0000:31:00.1: cvl_0_1 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.755 
19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.755 19:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:29:26.755 00:29:26.755 --- 10.0.0.2 ping statistics --- 00:29:26.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.755 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:29:26.755 00:29:26.755 --- 10.0.0.1 ping statistics --- 00:29:26.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.755 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=3999171 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3999171 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3999171 ']' 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:26.755 [2024-11-26 19:36:00.209796] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:26.755 [2024-11-26 19:36:00.210782] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:29:26.755 [2024-11-26 19:36:00.210818] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.755 [2024-11-26 19:36:00.282877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.755 [2024-11-26 19:36:00.311434] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.755 [2024-11-26 19:36:00.311463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.755 [2024-11-26 19:36:00.311469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.755 [2024-11-26 19:36:00.311475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.755 [2024-11-26 19:36:00.311479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:26.755 [2024-11-26 19:36:00.312943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.755 [2024-11-26 19:36:00.313094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:26.755 [2024-11-26 19:36:00.313259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:26.755 [2024-11-26 19:36:00.313353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.755 [2024-11-26 19:36:00.363363] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:26.755 [2024-11-26 19:36:00.364268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:26.755 [2024-11-26 19:36:00.364980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:26.755 [2024-11-26 19:36:00.365158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:26.755 [2024-11-26 19:36:00.365166] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.755 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.756 [2024-11-26 19:36:00.418088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.756 Malloc0 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:26.756 [2024-11-26 19:36:00.481870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:26.756 { 00:29:26.756 "params": { 00:29:26.756 "name": "Nvme$subsystem", 00:29:26.756 "trtype": "$TEST_TRANSPORT", 00:29:26.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:26.756 "adrfam": "ipv4", 00:29:26.756 "trsvcid": "$NVMF_PORT", 00:29:26.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:26.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:26.756 "hdgst": ${hdgst:-false}, 00:29:26.756 "ddgst": ${ddgst:-false} 00:29:26.756 }, 00:29:26.756 "method": "bdev_nvme_attach_controller" 00:29:26.756 } 00:29:26.756 EOF 00:29:26.756 )") 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:29:26.756 19:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:26.756 "params": { 00:29:26.756 "name": "Nvme1", 00:29:26.756 "trtype": "tcp", 00:29:26.756 "traddr": "10.0.0.2", 00:29:26.756 "adrfam": "ipv4", 00:29:26.756 "trsvcid": "4420", 00:29:26.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:26.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:26.756 "hdgst": false, 00:29:26.756 "ddgst": false 00:29:26.756 }, 00:29:26.756 "method": "bdev_nvme_attach_controller" 00:29:26.756 }' 00:29:26.756 [2024-11-26 19:36:00.519882] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:29:26.756 [2024-11-26 19:36:00.519931] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3999215 ] 00:29:26.756 [2024-11-26 19:36:00.603204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:27.017 [2024-11-26 19:36:00.643894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.017 [2024-11-26 19:36:00.643995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.017 [2024-11-26 19:36:00.643997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.017 I/O targets: 00:29:27.017 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:27.017 00:29:27.017 00:29:27.017 CUnit - A unit testing framework for C - Version 2.1-3 00:29:27.017 http://cunit.sourceforge.net/ 00:29:27.017 00:29:27.017 00:29:27.017 Suite: bdevio tests on: Nvme1n1 00:29:27.017 Test: blockdev write read block ...passed 00:29:27.276 Test: blockdev write zeroes read block ...passed 00:29:27.276 Test: blockdev write zeroes read no split ...passed 00:29:27.276 Test: blockdev 
write zeroes read split ...passed 00:29:27.276 Test: blockdev write zeroes read split partial ...passed 00:29:27.276 Test: blockdev reset ...[2024-11-26 19:36:00.983274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:27.276 [2024-11-26 19:36:00.983346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce34b0 (9): Bad file descriptor 00:29:27.276 [2024-11-26 19:36:00.989687] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:29:27.276 passed 00:29:27.276 Test: blockdev write read 8 blocks ...passed 00:29:27.276 Test: blockdev write read size > 128k ...passed 00:29:27.276 Test: blockdev write read invalid size ...passed 00:29:27.276 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:27.276 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:27.276 Test: blockdev write read max offset ...passed 00:29:27.276 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:27.276 Test: blockdev writev readv 8 blocks ...passed 00:29:27.276 Test: blockdev writev readv 30 x 1block ...passed 00:29:27.536 Test: blockdev writev readv block ...passed 00:29:27.536 Test: blockdev writev readv size > 128k ...passed 00:29:27.536 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:27.536 Test: blockdev comparev and writev ...[2024-11-26 19:36:01.214765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.214791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.214802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 
[2024-11-26 19:36:01.214809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.215343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.215353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.215363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.215368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.215910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.215919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.215932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.215937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.216429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.216438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.216447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:27.536 [2024-11-26 19:36:01.216452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:27.536 passed 00:29:27.536 Test: blockdev nvme passthru rw ...passed 00:29:27.536 Test: blockdev nvme passthru vendor specific ...[2024-11-26 19:36:01.301028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.536 [2024-11-26 19:36:01.301040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.301418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.536 [2024-11-26 19:36:01.301427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.301756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.536 [2024-11-26 19:36:01.301765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:27.536 [2024-11-26 19:36:01.302083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:27.536 [2024-11-26 19:36:01.302092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:27.536 passed 00:29:27.536 Test: blockdev nvme admin passthru ...passed 00:29:27.536 Test: blockdev copy ...passed 00:29:27.536 00:29:27.536 Run Summary: Type Total Ran Passed Failed Inactive 00:29:27.536 suites 1 1 n/a 0 0 00:29:27.536 tests 23 23 23 0 0 00:29:27.536 asserts 152 152 152 0 n/a 00:29:27.536 00:29:27.536 Elapsed time = 1.062 
seconds 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:27.795 rmmod nvme_tcp 00:29:27.795 rmmod nvme_fabrics 00:29:27.795 rmmod nvme_keyring 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 3999171 ']' 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3999171 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3999171 ']' 00:29:27.795 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3999171 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3999171 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3999171' 00:29:27.796 killing process with pid 3999171 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3999171 00:29:27.796 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3999171 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.056 19:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:29.960 00:29:29.960 real 0m8.960s 00:29:29.960 user 0m7.645s 00:29:29.960 sys 0m4.603s 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:29.960 ************************************ 00:29:29.960 END TEST nvmf_bdevio 00:29:29.960 ************************************ 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:29.960 00:29:29.960 real 4m22.159s 00:29:29.960 user 9m38.503s 00:29:29.960 sys 1m36.133s 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:29:29.960 19:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:29.960 ************************************ 00:29:29.960 END TEST nvmf_target_core_interrupt_mode 00:29:29.960 ************************************ 00:29:29.960 19:36:03 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:29.960 19:36:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:29.960 19:36:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.960 19:36:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:29.960 ************************************ 00:29:29.960 START TEST nvmf_interrupt 00:29:29.960 ************************************ 00:29:29.960 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:30.219 * Looking for test storage... 
00:29:30.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.220 --rc genhtml_branch_coverage=1 00:29:30.220 --rc genhtml_function_coverage=1 00:29:30.220 --rc genhtml_legend=1 00:29:30.220 --rc geninfo_all_blocks=1 00:29:30.220 --rc geninfo_unexecuted_blocks=1 00:29:30.220 00:29:30.220 ' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.220 --rc genhtml_branch_coverage=1 00:29:30.220 --rc 
genhtml_function_coverage=1 00:29:30.220 --rc genhtml_legend=1 00:29:30.220 --rc geninfo_all_blocks=1 00:29:30.220 --rc geninfo_unexecuted_blocks=1 00:29:30.220 00:29:30.220 ' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.220 --rc genhtml_branch_coverage=1 00:29:30.220 --rc genhtml_function_coverage=1 00:29:30.220 --rc genhtml_legend=1 00:29:30.220 --rc geninfo_all_blocks=1 00:29:30.220 --rc geninfo_unexecuted_blocks=1 00:29:30.220 00:29:30.220 ' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:30.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.220 --rc genhtml_branch_coverage=1 00:29:30.220 --rc genhtml_function_coverage=1 00:29:30.220 --rc genhtml_legend=1 00:29:30.220 --rc geninfo_all_blocks=1 00:29:30.220 --rc geninfo_unexecuted_blocks=1 00:29:30.220 00:29:30.220 ' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:30.220 
19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.220 
19:36:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:30.220 19:36:03 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:30.220 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:30.221 
19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:29:30.221 19:36:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.498 19:36:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:35.498 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:35.498 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.498 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.499 19:36:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:35.499 Found net devices under 0000:31:00.0: cvl_0_0 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:35.499 Found net devices under 0000:31:00.1: cvl_0_1 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.499 19:36:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.499 19:36:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.499 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.601 ms 00:29:35.500 00:29:35.500 --- 10.0.0.2 ping statistics --- 00:29:35.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.500 rtt min/avg/max/mdev = 0.601/0.601/0.601/0.000 ms 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:35.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:29:35.500 00:29:35.500 --- 10.0.0.1 ping statistics --- 00:29:35.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.500 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.500 19:36:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=4003967 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 4003967 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 4003967 ']' 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.500 19:36:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:35.500 [2024-11-26 19:36:09.311704] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:35.500 [2024-11-26 19:36:09.312857] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:29:35.500 [2024-11-26 19:36:09.312907] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.760 [2024-11-26 19:36:09.405126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:35.760 [2024-11-26 19:36:09.456806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.760 [2024-11-26 19:36:09.456857] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.760 [2024-11-26 19:36:09.456865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.760 [2024-11-26 19:36:09.456873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.760 [2024-11-26 19:36:09.456880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.760 [2024-11-26 19:36:09.458517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.760 [2024-11-26 19:36:09.458522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.760 [2024-11-26 19:36:09.536758] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:35.760 [2024-11-26 19:36:09.537070] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:35.760 [2024-11-26 19:36:09.537163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:36.327 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.327 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:29:36.327 19:36:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:36.327 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:36.328 5000+0 records in 00:29:36.328 5000+0 records out 00:29:36.328 10240000 bytes (10 MB, 9.8 MiB) copied, 0.00760269 s, 1.3 GB/s 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.328 AIO0 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.328 19:36:10 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.328 [2024-11-26 19:36:10.187192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.328 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:36.588 [2024-11-26 19:36:10.211651] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4003967 0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 0 idle 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003967 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.28 reactor_0' 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003967 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.28 reactor_0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:36.588 
19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 4003967 1 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 1 idle 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:36.588 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003971 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003971 root 20 0 128.2g 
44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=4004556 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4003967 0 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4003967 0 busy 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:36.848 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003967 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0' 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003967 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.49 reactor_0 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:29:37.108 19:36:10 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 4003967 1 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 4003967 1 busy 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:37.108 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003971 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1' 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003971 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:37.109 19:36:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 4004556 00:29:47.099 Initializing NVMe Controllers 00:29:47.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.099 Controller IO queue size 256, less than required. 00:29:47.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:47.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:47.099 Initialization complete. Launching workers. 
00:29:47.099 ======================================================== 00:29:47.099 Latency(us) 00:29:47.099 Device Information : IOPS MiB/s Average min max 00:29:47.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 20561.40 80.32 12454.93 3696.32 20890.13 00:29:47.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21230.80 82.93 12062.45 3711.79 20638.50 00:29:47.099 ======================================================== 00:29:47.099 Total : 41792.20 163.25 12255.55 3696.32 20890.13 00:29:47.099 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4003967 0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 0 idle 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003967 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0' 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003967 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.27 reactor_0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 4003967 1 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 1 idle 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:47.099 19:36:20 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:47.099 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003971 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003971 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:47.360 19:36:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:47.619 19:36:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:29:47.619 19:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:29:47.619 19:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:29:47.619 19:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:29:47.619 19:36:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:29:50.154 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4003967 0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 0 idle 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003967 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.43 reactor_0' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003967 root 20 0 128.2g 79488 32256 S 6.2 0.1 0:20.43 reactor_0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 4003967 1 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 4003967 1 idle 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=4003967 00:29:50.155 
19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 4003967 -w 256 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='4003971 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.05 reactor_1' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 4003971 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.05 reactor_1 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:50.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.155 rmmod nvme_tcp 00:29:50.155 rmmod nvme_fabrics 00:29:50.155 rmmod nvme_keyring 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.155 19:36:23 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 4003967 ']' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 4003967 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 4003967 ']' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 4003967 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.155 19:36:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4003967 00:29:50.155 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:50.155 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:50.155 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4003967' 00:29:50.155 killing process with pid 4003967 00:29:50.155 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 4003967 00:29:50.155 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 4003967 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:50.414 19:36:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.322 19:36:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.322 00:29:52.322 real 0m22.375s 00:29:52.322 user 0m39.710s 00:29:52.322 sys 0m7.182s 00:29:52.322 19:36:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.322 19:36:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:52.322 ************************************ 00:29:52.322 END TEST nvmf_interrupt 00:29:52.322 ************************************ 00:29:52.678 00:29:52.678 real 26m19.917s 00:29:52.678 user 56m52.163s 00:29:52.678 sys 8m0.385s 00:29:52.678 19:36:26 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:52.678 19:36:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.678 ************************************ 00:29:52.678 END TEST nvmf_tcp 00:29:52.678 ************************************ 00:29:52.678 19:36:26 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:29:52.678 19:36:26 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:52.678 19:36:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:52.678 19:36:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:52.678 19:36:26 -- common/autotest_common.sh@10 -- # set +x 00:29:52.678 ************************************ 
00:29:52.678 START TEST spdkcli_nvmf_tcp 00:29:52.678 ************************************ 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:52.678 * Looking for test storage... 00:29:52.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:52.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.678 --rc genhtml_branch_coverage=1 00:29:52.678 --rc genhtml_function_coverage=1 00:29:52.678 --rc genhtml_legend=1 00:29:52.678 --rc geninfo_all_blocks=1 00:29:52.678 --rc geninfo_unexecuted_blocks=1 00:29:52.678 00:29:52.678 ' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:52.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.678 --rc genhtml_branch_coverage=1 00:29:52.678 --rc genhtml_function_coverage=1 00:29:52.678 --rc genhtml_legend=1 00:29:52.678 --rc geninfo_all_blocks=1 
00:29:52.678 --rc geninfo_unexecuted_blocks=1 00:29:52.678 00:29:52.678 ' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:52.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.678 --rc genhtml_branch_coverage=1 00:29:52.678 --rc genhtml_function_coverage=1 00:29:52.678 --rc genhtml_legend=1 00:29:52.678 --rc geninfo_all_blocks=1 00:29:52.678 --rc geninfo_unexecuted_blocks=1 00:29:52.678 00:29:52.678 ' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:52.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.678 --rc genhtml_branch_coverage=1 00:29:52.678 --rc genhtml_function_coverage=1 00:29:52.678 --rc genhtml_legend=1 00:29:52.678 --rc geninfo_all_blocks=1 00:29:52.678 --rc geninfo_unexecuted_blocks=1 00:29:52.678 00:29:52.678 ' 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.678 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:52.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=4008273 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 4008273 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 4008273 ']' 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.679 19:36:26 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:52.679 [2024-11-26 19:36:26.428696] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:29:52.679 [2024-11-26 19:36:26.428747] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4008273 ] 00:29:52.679 [2024-11-26 19:36:26.493027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:53.025 [2024-11-26 19:36:26.524557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.025 [2024-11-26 19:36:26.524557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter 
spdkcli_create_nvmf_config 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:53.025 19:36:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:53.025 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:53.025 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:53.025 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:53.025 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:53.025 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:53.025 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:53.025 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:53.025 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:53.025 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:53.025 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:53.025 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:53.025 ' 00:29:55.561 [2024-11-26 19:36:29.029049] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.499 [2024-11-26 19:36:30.273187] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:59.037 [2024-11-26 19:36:32.571564] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:00.942 [2024-11-26 19:36:34.541137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:02.319 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:02.319 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:02.319 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:02.319 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:02.319 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:02.319 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:02.319 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:02.319 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:02.319 19:36:36 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.319 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:02.579 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.579 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.579 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:02.579 19:36:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.838 19:36:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:02.838 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:02.838 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:02.838 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:02.838 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:02.838 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:02.838 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:02.838 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:02.838 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:02.838 ' 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:08.113 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:08.113 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:08.113 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:08.113 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4008273 ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4008273' 00:30:08.113 killing process with pid 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 4008273 00:30:08.113 19:36:41 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 4008273 ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 4008273 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 4008273 ']' 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 4008273 00:30:08.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4008273) - No such process 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 4008273 is not found' 00:30:08.113 Process with pid 4008273 is not found 00:30:08.113 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:08.114 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:08.114 19:36:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:08.114 00:30:08.114 real 0m15.673s 00:30:08.114 user 0m33.427s 00:30:08.114 sys 0m0.561s 00:30:08.114 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.114 19:36:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.114 ************************************ 00:30:08.114 END TEST spdkcli_nvmf_tcp 00:30:08.114 ************************************ 00:30:08.114 19:36:41 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:08.114 19:36:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.114 19:36:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:30:08.114 19:36:41 -- common/autotest_common.sh@10 -- # set +x 00:30:08.373 ************************************ 00:30:08.373 START TEST nvmf_identify_passthru 00:30:08.373 ************************************ 00:30:08.373 19:36:41 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:08.373 * Looking for test storage... 00:30:08.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:08.373 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:08.373 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:08.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.374 --rc genhtml_branch_coverage=1 00:30:08.374 --rc genhtml_function_coverage=1 00:30:08.374 --rc genhtml_legend=1 00:30:08.374 --rc geninfo_all_blocks=1 00:30:08.374 --rc geninfo_unexecuted_blocks=1 00:30:08.374 
00:30:08.374 ' 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:08.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.374 --rc genhtml_branch_coverage=1 00:30:08.374 --rc genhtml_function_coverage=1 00:30:08.374 --rc genhtml_legend=1 00:30:08.374 --rc geninfo_all_blocks=1 00:30:08.374 --rc geninfo_unexecuted_blocks=1 00:30:08.374 00:30:08.374 ' 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:08.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.374 --rc genhtml_branch_coverage=1 00:30:08.374 --rc genhtml_function_coverage=1 00:30:08.374 --rc genhtml_legend=1 00:30:08.374 --rc geninfo_all_blocks=1 00:30:08.374 --rc geninfo_unexecuted_blocks=1 00:30:08.374 00:30:08.374 ' 00:30:08.374 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:08.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.374 --rc genhtml_branch_coverage=1 00:30:08.374 --rc genhtml_function_coverage=1 00:30:08.374 --rc genhtml_legend=1 00:30:08.374 --rc geninfo_all_blocks=1 00:30:08.374 --rc geninfo_unexecuted_blocks=1 00:30:08.374 00:30:08.374 ' 00:30:08.374 19:36:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.374 19:36:42 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.374 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.374 19:36:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.374 19:36:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.374 19:36:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.374 19:36:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.374 19:36:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:08.375 19:36:42 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.375 19:36:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.375 19:36:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.375 19:36:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.375 19:36:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.375 19:36:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:08.375 19:36:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.375 19:36:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.375 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:08.375 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.375 19:36:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.375 19:36:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:13.654 
19:36:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:30:13.654 Found 0000:31:00.0 (0x8086 - 0x159b)
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:30:13.654 Found 0000:31:00.1 (0x8086 - 0x159b)
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:30:13.654 Found net devices under 0000:31:00.0: cvl_0_0
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:30:13.654 Found net devices under 0000:31:00.1: cvl_0_1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:13.654 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:13.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:13.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms
00:30:13.655
00:30:13.655 --- 10.0.0.2 ping statistics ---
00:30:13.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:13.655 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:13.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:13.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms
00:30:13.655
00:30:13.655 --- 10.0.0.1 ping statistics ---
00:30:13.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:13.655 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:13.655 19:36:47 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:30:13.655
19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:30:13.655 19:36:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:30:13.655 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:30:14.251 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605499
00:30:14.251 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:30:14.251 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:30:14.251 19:36:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=4015685
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 4015685
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 4015685 ']'
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:14.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.816 [2024-11-26 19:36:48.451775] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization...
00:30:14.816 [2024-11-26 19:36:48.451824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:14.816 [2024-11-26 19:36:48.521712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:14.816 [2024-11-26 19:36:48.552116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:14.816 [2024-11-26 19:36:48.552145] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:14.816 [2024-11-26 19:36:48.552151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:14.816 [2024-11-26 19:36:48.552156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:14.816 [2024-11-26 19:36:48.552161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:14.816 [2024-11-26 19:36:48.553627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:14.816 [2024-11-26 19:36:48.553780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:14.816 [2024-11-26 19:36:48.553930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:30:14.816 [2024-11-26 19:36:48.553932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:30:14.816 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.816 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.816 INFO: Log level set to 20
00:30:14.816 INFO: Requests:
00:30:14.816 {
00:30:14.816 "jsonrpc": "2.0",
00:30:14.816 "method": "nvmf_set_config",
00:30:14.816 "id": 1,
00:30:14.816 "params": {
00:30:14.816 "admin_cmd_passthru": {
00:30:14.816 "identify_ctrlr": true
00:30:14.816 }
00:30:14.816 }
00:30:14.816 }
00:30:14.816
00:30:14.816 INFO: response:
00:30:14.816 {
00:30:14.816 "jsonrpc": "2.0",
00:30:14.816 "id": 1,
00:30:14.816 "result": true
00:30:14.816 }
00:30:14.816
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.817 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.817 INFO: Setting log level to 20
00:30:14.817 INFO: Setting log level to 20
00:30:14.817 INFO: Log level set to 20
00:30:14.817 INFO: Log level set to 20
00:30:14.817 INFO: Requests:
00:30:14.817 {
00:30:14.817 "jsonrpc": "2.0",
00:30:14.817 "method": "framework_start_init",
00:30:14.817 "id": 1
00:30:14.817 }
00:30:14.817
00:30:14.817 INFO: Requests:
00:30:14.817 {
00:30:14.817 "jsonrpc": "2.0",
00:30:14.817 "method": "framework_start_init",
00:30:14.817 "id": 1
00:30:14.817 }
00:30:14.817
00:30:14.817 [2024-11-26 19:36:48.637449] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:30:14.817 INFO: response:
00:30:14.817 {
00:30:14.817 "jsonrpc": "2.0",
00:30:14.817 "id": 1,
00:30:14.817 "result": true
00:30:14.817 }
00:30:14.817
00:30:14.817 INFO: response:
00:30:14.817 {
00:30:14.817 "jsonrpc": "2.0",
00:30:14.817 "id": 1,
00:30:14.817 "result": true
00:30:14.817 }
00:30:14.817
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.817 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:14.817 INFO: Setting log level to 40
00:30:14.817 INFO: Setting log level to 40
00:30:14.817 INFO: Setting log level to 40
00:30:14.817 [2024-11-26 19:36:48.646483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:14.817 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:14.817 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.075 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
00:30:15.075 19:36:48
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.075 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.333 Nvme0n1
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.333 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.333 19:36:48 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.333 19:36:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.333 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.333 [2024-11-26 19:36:49.006608] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.333 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.333 [
00:30:15.333 {
00:30:15.333 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:30:15.333 "subtype": "Discovery",
00:30:15.333 "listen_addresses": [],
00:30:15.333 "allow_any_host": true,
00:30:15.333 "hosts": []
00:30:15.333 },
00:30:15.333 {
00:30:15.333 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:30:15.333 "subtype": "NVMe",
00:30:15.333 "listen_addresses": [
00:30:15.333 {
00:30:15.333 "trtype": "TCP",
00:30:15.333 "adrfam": "IPv4",
00:30:15.333 "traddr": "10.0.0.2",
00:30:15.333 "trsvcid": "4420"
00:30:15.333 }
00:30:15.333 ],
00:30:15.333 "allow_any_host": true,
00:30:15.333 "hosts": [],
00:30:15.333 "serial_number": "SPDK00000000000001",
00:30:15.333 "model_number": "SPDK bdev Controller",
00:30:15.333 "max_namespaces": 1,
00:30:15.333 "min_cntlid": 1,
00:30:15.333 "max_cntlid": 65519,
00:30:15.333 "namespaces": [
00:30:15.333 {
00:30:15.333 "nsid": 1,
00:30:15.333 "bdev_name": "Nvme0n1",
00:30:15.333 "name": "Nvme0n1",
00:30:15.333 "nguid": "363447305260549900253845000000A3",
00:30:15.333 "uuid": "36344730-5260-5499-0025-3845000000a3"
00:30:15.333 }
00:30:15.333 ]
00:30:15.333 }
00:30:15.333 ]
00:30:15.333 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.333 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:15.333 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:30:15.333 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605499
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605499 '!=' S64GNE0R605499 ']'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']'
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:15.592 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:15.592 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:15.592 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:30:15.592 19:36:49 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:15.592 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:15.593 rmmod nvme_tcp
00:30:15.593 rmmod nvme_fabrics
00:30:15.593 rmmod nvme_keyring
00:30:15.593 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:15.593 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e
00:30:15.593 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0
00:30:15.593 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 4015685 ']'
00:30:15.593 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 4015685
00:30:15.593 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 4015685 ']'
00:30:15.593 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 4015685
00:30:15.593 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname
00:30:15.593 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:15.593 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4015685
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4015685'
00:30:15.851 killing process with pid 4015685
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 4015685
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 4015685
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:15.851 19:36:49 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:30:15.851 19:36:49 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:18.387 19:36:51 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:18.388
00:30:18.388 real 0m9.761s
00:30:18.388 user 0m5.867s
00:30:18.388 sys 0m4.725s
00:30:18.388 19:36:51 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:18.388 19:36:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:30:18.388 ************************************
00:30:18.388 END TEST nvmf_identify_passthru
00:30:18.388 ************************************
00:30:18.388 19:36:51 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:30:18.388 19:36:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:18.388 19:36:51 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:18.388 19:36:51 -- common/autotest_common.sh@10 -- # set +x
00:30:18.388 ************************************
00:30:18.388 START TEST nvmf_dif
00:30:18.388 ************************************
00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:30:18.388 * Looking for test storage...
00:30:18.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.388 --rc genhtml_branch_coverage=1 00:30:18.388 --rc genhtml_function_coverage=1 00:30:18.388 --rc genhtml_legend=1 00:30:18.388 --rc geninfo_all_blocks=1 00:30:18.388 --rc geninfo_unexecuted_blocks=1 00:30:18.388 00:30:18.388 ' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.388 --rc genhtml_branch_coverage=1 00:30:18.388 --rc genhtml_function_coverage=1 00:30:18.388 --rc genhtml_legend=1 00:30:18.388 --rc geninfo_all_blocks=1 00:30:18.388 --rc geninfo_unexecuted_blocks=1 00:30:18.388 00:30:18.388 ' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:30:18.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.388 --rc genhtml_branch_coverage=1 00:30:18.388 --rc genhtml_function_coverage=1 00:30:18.388 --rc genhtml_legend=1 00:30:18.388 --rc geninfo_all_blocks=1 00:30:18.388 --rc geninfo_unexecuted_blocks=1 00:30:18.388 00:30:18.388 ' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.388 --rc genhtml_branch_coverage=1 00:30:18.388 --rc genhtml_function_coverage=1 00:30:18.388 --rc genhtml_legend=1 00:30:18.388 --rc geninfo_all_blocks=1 00:30:18.388 --rc geninfo_unexecuted_blocks=1 00:30:18.388 00:30:18.388 ' 00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:30:18.388 19:36:51 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.388 19:36:51 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.388 19:36:51 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.388 19:36:51 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.388 19:36:51 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.388 19:36:51 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:18.388 19:36:51 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:18.388 19:36:51 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:18.388 19:36:51 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.388 19:36:51 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.389 19:36:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:30:23.666 19:36:57 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:23.666 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:23.666 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.666 19:36:57 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:23.666 Found net devices under 0000:31:00.0: cvl_0_0 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:23.666 Found net devices under 0000:31:00.1: cvl_0_1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.666 
19:36:57 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.666 19:36:57 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:23.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:30:23.667 00:30:23.667 --- 10.0.0.2 ping statistics --- 00:30:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.667 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:30:23.667 00:30:23.667 --- 10.0.0.1 ping statistics --- 00:30:23.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.667 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:30:23.667 19:36:57 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:26.209 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:26.209 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.5 (8086 0b00): Already 
using the vfio-pci driver 00:30:26.209 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.209 19:36:59 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:26.209 19:36:59 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=4021663 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 4021663 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 4021663 ']' 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.209 19:36:59 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:26.209 19:36:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:26.209 [2024-11-26 19:36:59.982792] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:30:26.209 [2024-11-26 19:36:59.982843] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.209 [2024-11-26 19:37:00.068085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.467 [2024-11-26 19:37:00.103767] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.467 [2024-11-26 19:37:00.103802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.467 [2024-11-26 19:37:00.103810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.467 [2024-11-26 19:37:00.103817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.467 [2024-11-26 19:37:00.103822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.467 [2024-11-26 19:37:00.104430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:30:27.035 19:37:00 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 19:37:00 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.035 19:37:00 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:27.035 19:37:00 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 [2024-11-26 19:37:00.790341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.035 19:37:00 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 ************************************ 00:30:27.035 START TEST fio_dif_1_default 00:30:27.035 ************************************ 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 bdev_null0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:27.035 [2024-11-26 19:37:00.846611] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:27.035 { 00:30:27.035 "params": { 00:30:27.035 "name": "Nvme$subsystem", 00:30:27.035 "trtype": "$TEST_TRANSPORT", 00:30:27.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:27.035 "adrfam": "ipv4", 00:30:27.035 "trsvcid": "$NVMF_PORT", 
00:30:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:27.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:27.035 "hdgst": ${hdgst:-false}, 00:30:27.035 "ddgst": ${ddgst:-false} 00:30:27.035 }, 00:30:27.035 "method": "bdev_nvme_attach_controller" 00:30:27.035 } 00:30:27.035 EOF 00:30:27.035 )") 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:27.035 "params": { 00:30:27.035 "name": "Nvme0", 00:30:27.035 "trtype": "tcp", 00:30:27.035 "traddr": "10.0.0.2", 00:30:27.035 "adrfam": "ipv4", 00:30:27.035 "trsvcid": "4420", 00:30:27.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.035 "hdgst": false, 00:30:27.035 "ddgst": false 00:30:27.035 }, 00:30:27.035 "method": "bdev_nvme_attach_controller" 00:30:27.035 }' 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:27.035 19:37:00 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:27.625 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:27.625 fio-3.35 
00:30:27.625 Starting 1 thread 00:30:39.833 00:30:39.833 filename0: (groupid=0, jobs=1): err= 0: pid=4022191: Tue Nov 26 19:37:11 2024 00:30:39.833 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10011msec) 00:30:39.833 slat (nsec): min=4411, max=17379, avg=5912.30, stdev=935.01 00:30:39.833 clat (usec): min=573, max=43740, avg=21059.17, stdev=20174.58 00:30:39.833 lat (usec): min=578, max=43757, avg=21065.09, stdev=20174.57 00:30:39.833 clat percentiles (usec): 00:30:39.833 | 1.00th=[ 603], 5.00th=[ 799], 10.00th=[ 824], 20.00th=[ 848], 00:30:39.833 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[41157], 60.00th=[41157], 00:30:39.833 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:39.833 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:30:39.833 | 99.99th=[43779] 00:30:39.833 bw ( KiB/s): min= 704, max= 768, per=99.85%, avg=758.40, stdev=23.45, samples=20 00:30:39.833 iops : min= 176, max= 192, avg=189.60, stdev= 5.86, samples=20 00:30:39.833 lat (usec) : 750=2.11%, 1000=47.37% 00:30:39.833 lat (msec) : 2=0.42%, 50=50.11% 00:30:39.833 cpu : usr=93.67%, sys=6.12%, ctx=14, majf=0, minf=235 00:30:39.833 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:39.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:39.833 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:39.833 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:39.833 00:30:39.833 Run status group 0 (all jobs): 00:30:39.833 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7600KiB (7782kB), run=10011-10011msec 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in 
"$@" 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 00:30:39.833 real 0m11.015s 00:30:39.833 user 0m22.257s 00:30:39.833 sys 0m0.918s 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 ************************************ 00:30:39.833 END TEST fio_dif_1_default 00:30:39.833 ************************************ 00:30:39.833 19:37:11 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:39.833 19:37:11 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:39.833 19:37:11 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 ************************************ 00:30:39.833 START TEST fio_dif_1_multi_subsystems 00:30:39.833 ************************************ 00:30:39.833 19:37:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 bdev_null0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 [2024-11-26 19:37:11.909185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 bdev_null1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.833 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:39.834 
19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:39.834 { 00:30:39.834 "params": { 00:30:39.834 "name": "Nvme$subsystem", 00:30:39.834 "trtype": "$TEST_TRANSPORT", 00:30:39.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.834 "adrfam": "ipv4", 00:30:39.834 "trsvcid": "$NVMF_PORT", 00:30:39.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.834 "hdgst": ${hdgst:-false}, 00:30:39.834 "ddgst": ${ddgst:-false} 00:30:39.834 }, 00:30:39.834 "method": "bdev_nvme_attach_controller" 00:30:39.834 } 00:30:39.834 EOF 00:30:39.834 )") 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:39.834 { 00:30:39.834 "params": { 00:30:39.834 "name": "Nvme$subsystem", 00:30:39.834 "trtype": "$TEST_TRANSPORT", 00:30:39.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:39.834 "adrfam": "ipv4", 00:30:39.834 "trsvcid": "$NVMF_PORT", 00:30:39.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:39.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:39.834 "hdgst": ${hdgst:-false}, 00:30:39.834 "ddgst": ${ddgst:-false} 00:30:39.834 }, 00:30:39.834 "method": "bdev_nvme_attach_controller" 00:30:39.834 } 00:30:39.834 EOF 00:30:39.834 )") 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:39.834 "params": { 00:30:39.834 "name": "Nvme0", 00:30:39.834 "trtype": "tcp", 00:30:39.834 "traddr": "10.0.0.2", 00:30:39.834 "adrfam": "ipv4", 00:30:39.834 "trsvcid": "4420", 00:30:39.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:39.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:39.834 "hdgst": false, 00:30:39.834 "ddgst": false 00:30:39.834 }, 00:30:39.834 "method": "bdev_nvme_attach_controller" 00:30:39.834 },{ 00:30:39.834 "params": { 00:30:39.834 "name": "Nvme1", 00:30:39.834 "trtype": "tcp", 00:30:39.834 "traddr": "10.0.0.2", 00:30:39.834 "adrfam": "ipv4", 00:30:39.834 "trsvcid": "4420", 00:30:39.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:39.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:39.834 "hdgst": false, 00:30:39.834 "ddgst": false 00:30:39.834 }, 00:30:39.834 "method": "bdev_nvme_attach_controller" 00:30:39.834 }' 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:39.834 19:37:11 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:39.834 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:39.834 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:39.834 fio-3.35 00:30:39.834 Starting 2 threads 00:30:49.809 00:30:49.809 filename0: (groupid=0, jobs=1): err= 0: pid=4025036: Tue Nov 26 19:37:23 2024 00:30:49.809 read: IOPS=190, BW=763KiB/s (782kB/s)(7664KiB/10041msec) 00:30:49.809 slat (nsec): min=4328, max=12490, avg=5814.59, stdev=562.04 00:30:49.809 clat (usec): min=430, max=41632, avg=20945.04, stdev=20149.24 00:30:49.809 lat (usec): min=436, max=41641, avg=20950.86, stdev=20149.19 00:30:49.809 clat percentiles (usec): 00:30:49.809 | 1.00th=[ 627], 5.00th=[ 807], 10.00th=[ 824], 20.00th=[ 840], 00:30:49.809 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 1631], 60.00th=[41157], 00:30:49.809 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:49.809 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:30:49.809 | 99.99th=[41681] 00:30:49.809 bw ( KiB/s): min= 704, max= 768, per=66.68%, avg=764.80, stdev=14.31, samples=20 00:30:49.809 iops : min= 176, max= 192, avg=191.20, stdev= 3.58, samples=20 00:30:49.809 lat (usec) : 500=0.21%, 750=1.67%, 1000=48.02% 00:30:49.809 lat (msec) : 2=0.21%, 50=49.90% 00:30:49.809 cpu : usr=95.82%, sys=3.95%, ctx=26, majf=0, minf=101 00:30:49.809 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:30:49.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.809 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.809 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:49.809 filename1: (groupid=0, jobs=1): err= 0: pid=4025037: Tue Nov 26 19:37:23 2024 00:30:49.809 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10031msec) 00:30:49.809 slat (nsec): min=4265, max=17204, avg=5847.02, stdev=814.06 00:30:49.809 clat (usec): min=1330, max=42072, avg=41775.92, stdev=2624.76 00:30:49.809 lat (usec): min=1334, max=42078, avg=41781.77, stdev=2624.67 00:30:49.809 clat percentiles (usec): 00:30:49.809 | 1.00th=[41157], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:30:49.809 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:49.809 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:49.809 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:30:49.809 | 99.99th=[42206] 00:30:49.809 bw ( KiB/s): min= 352, max= 416, per=33.34%, avg=382.40, stdev=12.61, samples=20 00:30:49.809 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:30:49.809 lat (msec) : 2=0.42%, 50=99.58% 00:30:49.809 cpu : usr=95.52%, sys=4.28%, ctx=8, majf=0, minf=145 00:30:49.809 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:49.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:49.809 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:49.809 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:49.809 00:30:49.809 Run status group 0 (all jobs): 00:30:49.809 READ: bw=1146KiB/s (1173kB/s), 383KiB/s-763KiB/s (392kB/s-782kB/s), io=11.2MiB (11.8MB), run=10031-10041msec 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:37:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 00:30:49.809 real 0m11.493s 00:30:49.809 user 0m32.760s 00:30:49.809 sys 0m1.122s 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 ************************************ 00:30:49.809 END TEST fio_dif_1_multi_subsystems 00:30:49.809 ************************************ 00:30:49.809 19:37:23 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:49.809 19:37:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:49.809 19:37:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 ************************************ 00:30:49.809 START TEST fio_dif_rand_params 00:30:49.809 ************************************ 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:49.809 19:37:23 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 bdev_null0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:49.809 [2024-11-26 19:37:23.447151] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:49.809 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 
00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:49.810 { 00:30:49.810 "params": { 00:30:49.810 "name": "Nvme$subsystem", 00:30:49.810 "trtype": "$TEST_TRANSPORT", 00:30:49.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.810 "adrfam": "ipv4", 00:30:49.810 "trsvcid": "$NVMF_PORT", 00:30:49.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.810 "hdgst": ${hdgst:-false}, 00:30:49.810 "ddgst": ${ddgst:-false} 00:30:49.810 }, 00:30:49.810 "method": "bdev_nvme_attach_controller" 00:30:49.810 } 00:30:49.810 EOF 00:30:49.810 )") 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:49.810 
19:37:23 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:49.810 "params": { 00:30:49.810 "name": "Nvme0", 00:30:49.810 "trtype": "tcp", 00:30:49.810 "traddr": "10.0.0.2", 00:30:49.810 "adrfam": "ipv4", 00:30:49.810 "trsvcid": "4420", 00:30:49.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.810 "hdgst": false, 00:30:49.810 "ddgst": false 00:30:49.810 }, 00:30:49.810 "method": "bdev_nvme_attach_controller" 00:30:49.810 }' 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:49.810 19:37:23 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:50.069 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:50.069 ... 00:30:50.069 fio-3.35 00:30:50.069 Starting 3 threads 00:30:56.669 00:30:56.669 filename0: (groupid=0, jobs=1): err= 0: pid=4027544: Tue Nov 26 19:37:29 2024 00:30:56.669 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5048msec) 00:30:56.669 slat (nsec): min=4302, max=38110, avg=7243.27, stdev=1997.18 00:30:56.669 clat (usec): min=4421, max=49716, avg=10016.69, stdev=2815.21 00:30:56.669 lat (usec): min=4429, max=49724, avg=10023.93, stdev=2815.06 00:30:56.669 clat percentiles (usec): 00:30:56.669 | 1.00th=[ 5080], 5.00th=[ 7111], 10.00th=[ 7832], 20.00th=[ 8455], 00:30:56.669 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10552], 00:30:56.669 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11863], 95.00th=[12256], 00:30:56.669 | 99.00th=[13566], 99.50th=[15008], 99.90th=[49546], 99.95th=[49546], 00:30:56.669 | 99.99th=[49546] 00:30:56.669 bw ( KiB/s): min=35072, max=43008, per=31.21%, avg=38502.40, stdev=2699.02, samples=10 00:30:56.669 iops : min= 274, max= 336, avg=300.80, stdev=21.09, samples=10 00:30:56.669 lat (msec) : 10=48.21%, 20=51.46%, 50=0.33% 00:30:56.669 cpu : usr=95.13%, sys=3.90%, ctx=308, majf=0, minf=100 00:30:56.669 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.669 issued rwts: total=1506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.669 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:56.669 filename0: (groupid=0, jobs=1): err= 0: pid=4027545: Tue Nov 26 19:37:29 2024 00:30:56.669 read: IOPS=335, BW=41.9MiB/s (43.9MB/s)(211MiB/5045msec) 00:30:56.669 slat (nsec): min=4290, max=37963, avg=6544.70, stdev=1170.30 00:30:56.669 clat 
(usec): min=4582, max=52848, avg=8915.91, stdev=6430.95 00:30:56.669 lat (usec): min=4588, max=52861, avg=8922.45, stdev=6430.99 00:30:56.669 clat percentiles (usec): 00:30:56.669 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7111], 00:30:56.669 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:30:56.669 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9372], 95.00th=[10028], 00:30:56.669 | 99.00th=[49021], 99.50th=[50070], 99.90th=[52691], 99.95th=[52691], 00:30:56.669 | 99.99th=[52691] 00:30:56.669 bw ( KiB/s): min=35142, max=50944, per=35.06%, avg=43245.40, stdev=5658.78, samples=10 00:30:56.669 iops : min= 274, max= 398, avg=337.80, stdev=44.30, samples=10 00:30:56.669 lat (msec) : 10=95.09%, 20=2.48%, 50=1.77%, 100=0.65% 00:30:56.669 cpu : usr=96.93%, sys=2.84%, ctx=6, majf=0, minf=80 00:30:56.669 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.669 issued rwts: total=1691,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.669 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:56.669 filename0: (groupid=0, jobs=1): err= 0: pid=4027546: Tue Nov 26 19:37:29 2024 00:30:56.669 read: IOPS=330, BW=41.3MiB/s (43.3MB/s)(209MiB/5046msec) 00:30:56.669 slat (nsec): min=4323, max=37039, avg=6531.72, stdev=1257.77 00:30:56.670 clat (usec): min=3434, max=49905, avg=9041.14, stdev=4271.95 00:30:56.670 lat (usec): min=3440, max=49911, avg=9047.67, stdev=4272.10 00:30:56.670 clat percentiles (usec): 00:30:56.670 | 1.00th=[ 4555], 5.00th=[ 5604], 10.00th=[ 6390], 20.00th=[ 7439], 00:30:56.670 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:30:56.670 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[11469], 00:30:56.670 | 99.00th=[45351], 99.50th=[48497], 99.90th=[49546], 99.95th=[50070], 
00:30:56.670 | 99.99th=[50070] 00:30:56.670 bw ( KiB/s): min=35072, max=53248, per=34.57%, avg=42649.60, stdev=5402.62, samples=10 00:30:56.670 iops : min= 274, max= 416, avg=333.20, stdev=42.21, samples=10 00:30:56.670 lat (msec) : 4=0.84%, 10=77.52%, 20=20.62%, 50=1.02% 00:30:56.670 cpu : usr=96.63%, sys=3.13%, ctx=5, majf=0, minf=87 00:30:56.670 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:56.670 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.670 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:56.670 issued rwts: total=1668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:56.670 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:56.670 00:30:56.670 Run status group 0 (all jobs): 00:30:56.670 READ: bw=120MiB/s (126MB/s), 37.3MiB/s-41.9MiB/s (39.1MB/s-43.9MB/s), io=608MiB (638MB), run=5045-5048msec 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 bdev_null0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 
19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 [2024-11-26 19:37:29.453251] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 bdev_null1 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 
19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:30:56.670 bdev_null2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.670 
19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:30:56.670 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:56.671 { 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme$subsystem", 00:30:56.671 "trtype": "$TEST_TRANSPORT", 00:30:56.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "$NVMF_PORT", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.671 "hdgst": ${hdgst:-false}, 00:30:56.671 "ddgst": ${ddgst:-false} 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 } 00:30:56.671 EOF 00:30:56.671 
)") 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:56.671 { 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme$subsystem", 00:30:56.671 "trtype": "$TEST_TRANSPORT", 00:30:56.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "$NVMF_PORT", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.671 "hdgst": ${hdgst:-false}, 00:30:56.671 "ddgst": ${ddgst:-false} 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 } 00:30:56.671 EOF 00:30:56.671 )") 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:56.671 { 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme$subsystem", 00:30:56.671 "trtype": "$TEST_TRANSPORT", 00:30:56.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "$NVMF_PORT", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:56.671 "hdgst": ${hdgst:-false}, 00:30:56.671 "ddgst": ${ddgst:-false} 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 } 00:30:56.671 EOF 00:30:56.671 )") 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme0", 00:30:56.671 "trtype": "tcp", 00:30:56.671 "traddr": "10.0.0.2", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "4420", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:56.671 "hdgst": false, 00:30:56.671 "ddgst": false 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 },{ 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme1", 00:30:56.671 "trtype": "tcp", 00:30:56.671 "traddr": "10.0.0.2", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "4420", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:56.671 "hdgst": false, 00:30:56.671 "ddgst": false 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 },{ 00:30:56.671 "params": { 00:30:56.671 "name": "Nvme2", 00:30:56.671 "trtype": "tcp", 00:30:56.671 "traddr": "10.0.0.2", 00:30:56.671 "adrfam": "ipv4", 00:30:56.671 "trsvcid": "4420", 00:30:56.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:56.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:56.671 "hdgst": false, 00:30:56.671 "ddgst": false 00:30:56.671 }, 00:30:56.671 "method": "bdev_nvme_attach_controller" 00:30:56.671 }' 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:56.671 19:37:29 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:56.671 19:37:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:56.671 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:56.671 ... 00:30:56.671 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:56.671 ... 00:30:56.671 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:56.671 ... 
00:30:56.671 fio-3.35 00:30:56.671 Starting 24 threads 00:31:08.887 00:31:08.887 filename0: (groupid=0, jobs=1): err= 0: pid=4029050: Tue Nov 26 19:37:41 2024 00:31:08.887 read: IOPS=728, BW=2915KiB/s (2985kB/s)(28.5MiB/10022msec) 00:31:08.887 slat (usec): min=4, max=563, avg= 8.08, stdev= 7.10 00:31:08.887 clat (usec): min=638, max=36945, avg=21886.00, stdev=4748.86 00:31:08.887 lat (usec): min=646, max=36951, avg=21894.08, stdev=4749.37 00:31:08.887 clat percentiles (usec): 00:31:08.887 | 1.00th=[ 1516], 5.00th=[14353], 10.00th=[15533], 20.00th=[17171], 00:31:08.887 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:31:08.887 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.887 | 99.00th=[25822], 99.50th=[28967], 99.90th=[34866], 99.95th=[35390], 00:31:08.887 | 99.99th=[36963] 00:31:08.887 bw ( KiB/s): min= 2560, max= 4096, per=4.59%, avg=2915.70, stdev=502.77, samples=20 00:31:08.887 iops : min= 640, max= 1024, avg=728.90, stdev=125.70, samples=20 00:31:08.887 lat (usec) : 750=0.03%, 1000=0.05% 00:31:08.887 lat (msec) : 2=1.45%, 4=0.44%, 10=0.82%, 20=23.14%, 50=74.07% 00:31:08.887 cpu : usr=97.67%, sys=1.49%, ctx=666, majf=0, minf=40 00:31:08.887 IO depths : 1=3.9%, 2=8.7%, 4=20.3%, 8=58.5%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 complete : 0=0.0%, 4=92.8%, 8=1.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 issued rwts: total=7304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.887 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.887 filename0: (groupid=0, jobs=1): err= 0: pid=4029051: Tue Nov 26 19:37:41 2024 00:31:08.887 read: IOPS=652, BW=2610KiB/s (2673kB/s)(25.5MiB/10005msec) 00:31:08.887 slat (nsec): min=2972, max=69518, avg=16045.02, stdev=10596.50 00:31:08.887 clat (usec): min=13134, max=48109, avg=24384.35, stdev=1386.02 00:31:08.887 lat (usec): min=13142, max=48118, avg=24400.39, stdev=1385.85 
00:31:08.887 clat percentiles (usec): 00:31:08.887 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:31:08.887 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:31:08.887 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:31:08.887 | 99.00th=[30278], 99.50th=[32113], 99.90th=[47973], 99.95th=[47973], 00:31:08.887 | 99.99th=[47973] 00:31:08.887 bw ( KiB/s): min= 2549, max= 2688, per=4.10%, avg=2607.42, stdev=63.39, samples=19 00:31:08.887 iops : min= 637, max= 672, avg=651.84, stdev=15.86, samples=19 00:31:08.887 lat (msec) : 20=0.23%, 50=99.77% 00:31:08.887 cpu : usr=99.08%, sys=0.66%, ctx=14, majf=0, minf=15 00:31:08.887 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.887 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.887 filename0: (groupid=0, jobs=1): err= 0: pid=4029052: Tue Nov 26 19:37:41 2024 00:31:08.887 read: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10007msec) 00:31:08.887 slat (nsec): min=3024, max=69062, avg=10073.65, stdev=7176.09 00:31:08.887 clat (usec): min=15093, max=36089, avg=24323.90, stdev=805.24 00:31:08.887 lat (usec): min=15101, max=36101, avg=24333.98, stdev=804.93 00:31:08.887 clat percentiles (usec): 00:31:08.887 | 1.00th=[23200], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:31:08.887 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:31:08.887 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:31:08.887 | 99.00th=[25560], 99.50th=[26084], 99.90th=[32113], 99.95th=[33424], 00:31:08.887 | 99.99th=[35914] 00:31:08.887 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2620.58, stdev=65.09, samples=19 00:31:08.887 iops : min= 640, 
max= 672, avg=655.11, stdev=16.26, samples=19 00:31:08.887 lat (msec) : 20=0.46%, 50=99.54% 00:31:08.887 cpu : usr=99.04%, sys=0.69%, ctx=16, majf=0, minf=18 00:31:08.887 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.887 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.887 filename0: (groupid=0, jobs=1): err= 0: pid=4029053: Tue Nov 26 19:37:41 2024 00:31:08.887 read: IOPS=658, BW=2634KiB/s (2697kB/s)(25.8MiB/10013msec) 00:31:08.887 slat (nsec): min=3097, max=83241, avg=18760.08, stdev=11865.43 00:31:08.887 clat (usec): min=14458, max=37411, avg=24130.68, stdev=1494.46 00:31:08.887 lat (usec): min=14467, max=37420, avg=24149.44, stdev=1495.10 00:31:08.887 clat percentiles (usec): 00:31:08.887 | 1.00th=[16712], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:31:08.887 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.887 | 70.00th=[24511], 80.00th=[24773], 90.00th=[24773], 95.00th=[25035], 00:31:08.887 | 99.00th=[27657], 99.50th=[30540], 99.90th=[36963], 99.95th=[37487], 00:31:08.887 | 99.99th=[37487] 00:31:08.887 bw ( KiB/s): min= 2560, max= 2864, per=4.14%, avg=2634.95, stdev=86.17, samples=19 00:31:08.887 iops : min= 640, max= 716, avg=658.74, stdev=21.54, samples=19 00:31:08.887 lat (msec) : 20=2.58%, 50=97.42% 00:31:08.887 cpu : usr=98.92%, sys=0.80%, ctx=14, majf=0, minf=15 00:31:08.887 IO depths : 1=5.7%, 2=11.4%, 4=23.7%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:08.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.887 issued rwts: total=6594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.887 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:08.887 filename0: (groupid=0, jobs=1): err= 0: pid=4029054: Tue Nov 26 19:37:41 2024 00:31:08.887 read: IOPS=661, BW=2647KiB/s (2710kB/s)(25.9MiB/10005msec) 00:31:08.887 slat (nsec): min=5707, max=68253, avg=15631.25, stdev=9739.48 00:31:08.887 clat (usec): min=9082, max=36482, avg=24035.26, stdev=1714.57 00:31:08.887 lat (usec): min=9089, max=36488, avg=24050.89, stdev=1715.27 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[14222], 5.00th=[23200], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.888 | 99.00th=[27395], 99.50th=[28181], 99.90th=[32375], 99.95th=[36439], 00:31:08.888 | 99.99th=[36439] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2816, per=4.15%, avg=2636.63, stdev=75.91, samples=19 00:31:08.888 iops : min= 640, max= 704, avg=659.16, stdev=18.98, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=2.45%, 50=97.31% 00:31:08.888 cpu : usr=98.93%, sys=0.81%, ctx=25, majf=0, minf=20 00:31:08.888 IO depths : 1=5.6%, 2=11.5%, 4=23.7%, 8=52.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename0: (groupid=0, jobs=1): err= 0: pid=4029055: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=663, BW=2654KiB/s (2718kB/s)(26.0MiB/10019msec) 00:31:08.888 slat (nsec): min=5728, max=65966, avg=14634.81, stdev=9728.37 00:31:08.888 clat (usec): min=9415, max=40486, avg=23978.30, stdev=2060.26 00:31:08.888 lat (usec): min=9441, max=40493, avg=23992.94, stdev=2060.62 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[15270], 5.00th=[21103], 
10.00th=[23462], 20.00th=[23725], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.888 | 99.00th=[28443], 99.50th=[33424], 99.90th=[38536], 99.95th=[40633], 00:31:08.888 | 99.99th=[40633] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2864, per=4.18%, avg=2654.40, stdev=91.24, samples=20 00:31:08.888 iops : min= 640, max= 716, avg=663.60, stdev=22.81, samples=20 00:31:08.888 lat (msec) : 10=0.14%, 20=4.00%, 50=95.86% 00:31:08.888 cpu : usr=99.09%, sys=0.64%, ctx=10, majf=0, minf=32 00:31:08.888 IO depths : 1=5.7%, 2=11.4%, 4=23.6%, 8=52.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename0: (groupid=0, jobs=1): err= 0: pid=4029056: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10005msec) 00:31:08.888 slat (nsec): min=4191, max=80645, avg=17686.27, stdev=10934.16 00:31:08.888 clat (usec): min=9701, max=39686, avg=24244.78, stdev=1291.64 00:31:08.888 lat (usec): min=9720, max=39700, avg=24262.47, stdev=1291.52 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25035], 00:31:08.888 | 99.00th=[25560], 99.50th=[26084], 99.90th=[39584], 99.95th=[39584], 00:31:08.888 | 99.99th=[39584] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2620.89, stdev=65.42, samples=19 00:31:08.888 iops : min= 640, max= 672, avg=655.21, stdev=16.37, samples=19 00:31:08.888 lat (msec) : 
10=0.24%, 20=0.43%, 50=99.33% 00:31:08.888 cpu : usr=98.72%, sys=0.86%, ctx=123, majf=0, minf=23 00:31:08.888 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename0: (groupid=0, jobs=1): err= 0: pid=4029057: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=652, BW=2610KiB/s (2672kB/s)(25.5MiB/10006msec) 00:31:08.888 slat (nsec): min=4122, max=78159, avg=20098.42, stdev=11877.81 00:31:08.888 clat (usec): min=16502, max=45506, avg=24339.54, stdev=1238.32 00:31:08.888 lat (usec): min=16542, max=45518, avg=24359.64, stdev=1238.01 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:31:08.888 | 99.00th=[30540], 99.50th=[31851], 99.90th=[34341], 99.95th=[45351], 00:31:08.888 | 99.99th=[45351] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2688, per=4.10%, avg=2607.68, stdev=63.04, samples=19 00:31:08.888 iops : min= 640, max= 672, avg=651.89, stdev=15.78, samples=19 00:31:08.888 lat (msec) : 20=0.47%, 50=99.53% 00:31:08.888 cpu : usr=98.69%, sys=0.89%, ctx=60, majf=0, minf=17 00:31:08.888 IO depths : 1=6.1%, 2=12.1%, 4=24.4%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 
filename1: (groupid=0, jobs=1): err= 0: pid=4029058: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=657, BW=2632KiB/s (2695kB/s)(25.7MiB/10004msec) 00:31:08.888 slat (nsec): min=4686, max=80336, avg=18417.49, stdev=11774.37 00:31:08.888 clat (usec): min=9169, max=52320, avg=24149.99, stdev=2318.10 00:31:08.888 lat (usec): min=9194, max=52334, avg=24168.41, stdev=2318.54 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[16319], 5.00th=[22938], 10.00th=[23725], 20.00th=[23725], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:31:08.888 | 99.00th=[32637], 99.50th=[35914], 99.90th=[46924], 99.95th=[46924], 00:31:08.888 | 99.99th=[52167] 00:31:08.888 bw ( KiB/s): min= 2448, max= 2784, per=4.14%, avg=2629.89, stdev=83.00, samples=19 00:31:08.888 iops : min= 612, max= 696, avg=657.47, stdev=20.75, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=3.80%, 50=95.91%, 100=0.05% 00:31:08.888 cpu : usr=98.61%, sys=0.94%, ctx=201, majf=0, minf=19 00:31:08.888 IO depths : 1=5.5%, 2=11.1%, 4=23.1%, 8=53.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029059: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10004msec) 00:31:08.888 slat (nsec): min=4224, max=82983, avg=17259.98, stdev=12204.45 00:31:08.888 clat (usec): min=9270, max=57899, avg=23894.84, stdev=3086.52 00:31:08.888 lat (usec): min=9291, max=57917, avg=23912.10, stdev=3087.78 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[14353], 5.00th=[17957], 10.00th=[21103], 20.00th=[23725], 
00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25822], 00:31:08.888 | 99.00th=[33817], 99.50th=[37487], 99.90th=[46924], 99.95th=[47449], 00:31:08.888 | 99.99th=[57934] 00:31:08.888 bw ( KiB/s): min= 2544, max= 2880, per=4.19%, avg=2665.26, stdev=103.74, samples=19 00:31:08.888 iops : min= 636, max= 720, avg=666.32, stdev=25.93, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=7.98%, 50=91.75%, 100=0.03% 00:31:08.888 cpu : usr=98.50%, sys=1.00%, ctx=184, majf=0, minf=22 00:31:08.888 IO depths : 1=4.6%, 2=9.5%, 4=20.3%, 8=57.3%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=92.9%, 8=1.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029060: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10002msec) 00:31:08.888 slat (nsec): min=4320, max=68018, avg=10390.94, stdev=8008.94 00:31:08.888 clat (usec): min=15367, max=34178, avg=24302.01, stdev=708.65 00:31:08.888 lat (usec): min=15374, max=34186, avg=24312.40, stdev=708.04 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:31:08.888 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:31:08.888 | 99.00th=[25822], 99.50th=[26346], 99.90th=[31327], 99.95th=[32375], 00:31:08.888 | 99.99th=[34341] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2693, per=4.12%, avg=2620.89, stdev=65.96, samples=19 00:31:08.888 iops : min= 640, max= 673, avg=655.21, stdev=16.47, samples=19 00:31:08.888 lat (msec) : 20=0.18%, 50=99.82% 
00:31:08.888 cpu : usr=98.69%, sys=0.91%, ctx=68, majf=0, minf=21 00:31:08.888 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029061: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=661, BW=2647KiB/s (2710kB/s)(25.9MiB/10005msec) 00:31:08.888 slat (nsec): min=4202, max=71149, avg=15369.47, stdev=11199.38 00:31:08.888 clat (usec): min=9157, max=58837, avg=24060.06, stdev=2766.37 00:31:08.888 lat (usec): min=9169, max=58849, avg=24075.43, stdev=2766.84 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[15008], 5.00th=[19006], 10.00th=[23200], 20.00th=[23725], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[26346], 00:31:08.888 | 99.00th=[32375], 99.50th=[37487], 99.90th=[47973], 99.95th=[47973], 00:31:08.888 | 99.99th=[58983] 00:31:08.888 bw ( KiB/s): min= 2501, max= 2848, per=4.15%, avg=2639.42, stdev=86.45, samples=19 00:31:08.888 iops : min= 625, max= 712, avg=659.84, stdev=21.63, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=6.13%, 50=93.60%, 100=0.03% 00:31:08.888 cpu : usr=99.00%, sys=0.73%, ctx=14, majf=0, minf=20 00:31:08.888 IO depths : 1=3.2%, 2=6.6%, 4=15.0%, 8=64.0%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=91.9%, 8=4.2%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: 
(groupid=0, jobs=1): err= 0: pid=4029062: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=658, BW=2635KiB/s (2699kB/s)(25.8MiB/10005msec) 00:31:08.888 slat (nsec): min=5800, max=78650, avg=17134.97, stdev=11066.49 00:31:08.888 clat (usec): min=7437, max=25971, avg=24135.87, stdev=1328.23 00:31:08.888 lat (usec): min=7445, max=25984, avg=24153.00, stdev=1327.90 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[19792], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.888 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:31:08.888 | 99.99th=[26084] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2634.11, stdev=77.69, samples=19 00:31:08.888 iops : min= 640, max= 704, avg=658.53, stdev=19.42, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=0.94%, 50=98.82% 00:31:08.888 cpu : usr=98.79%, sys=0.82%, ctx=157, majf=0, minf=28 00:31:08.888 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029063: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=658, BW=2635KiB/s (2699kB/s)(25.8MiB/10005msec) 00:31:08.888 slat (usec): min=5, max=563, avg=14.94, stdev=12.32 00:31:08.888 clat (usec): min=7450, max=25959, avg=24160.47, stdev=1346.50 00:31:08.888 lat (usec): min=7459, max=25966, avg=24175.41, stdev=1345.75 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[19530], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 
40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.888 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25035], 00:31:08.888 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:31:08.888 | 99.99th=[26084] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2634.11, stdev=77.69, samples=19 00:31:08.888 iops : min= 640, max= 704, avg=658.53, stdev=19.42, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=0.97%, 50=98.79% 00:31:08.888 cpu : usr=98.93%, sys=0.80%, ctx=9, majf=0, minf=24 00:31:08.888 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029064: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=664, BW=2659KiB/s (2723kB/s)(26.0MiB/10012msec) 00:31:08.888 slat (nsec): min=4148, max=73863, avg=12730.83, stdev=10826.80 00:31:08.888 clat (usec): min=8348, max=41071, avg=24003.35, stdev=3275.18 00:31:08.888 lat (usec): min=8373, max=41077, avg=24016.08, stdev=3275.44 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[14877], 5.00th=[18482], 10.00th=[19792], 20.00th=[22414], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:31:08.888 | 70.00th=[24773], 80.00th=[25035], 90.00th=[27657], 95.00th=[29492], 00:31:08.888 | 99.00th=[33162], 99.50th=[35914], 99.90th=[41157], 99.95th=[41157], 00:31:08.888 | 99.99th=[41157] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2800, per=4.18%, avg=2656.84, stdev=70.70, samples=19 00:31:08.888 iops : min= 640, max= 700, avg=664.21, stdev=17.67, samples=19 00:31:08.888 lat (msec) : 10=0.18%, 20=11.72%, 50=88.10% 00:31:08.888 cpu : usr=98.83%, 
sys=0.78%, ctx=124, majf=0, minf=21 00:31:08.888 IO depths : 1=0.4%, 2=0.9%, 4=3.9%, 8=79.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=89.3%, 8=8.6%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename1: (groupid=0, jobs=1): err= 0: pid=4029065: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=655, BW=2622KiB/s (2685kB/s)(25.6MiB/10006msec) 00:31:08.888 slat (nsec): min=4158, max=67823, avg=9703.38, stdev=6373.74 00:31:08.888 clat (usec): min=15766, max=32881, avg=24318.85, stdev=780.37 00:31:08.888 lat (usec): min=15785, max=32893, avg=24328.55, stdev=779.80 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[23200], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.888 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:31:08.888 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:31:08.888 | 99.00th=[25822], 99.50th=[25822], 99.90th=[32900], 99.95th=[32900], 00:31:08.888 | 99.99th=[32900] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2620.63, stdev=65.66, samples=19 00:31:08.888 iops : min= 640, max= 672, avg=655.16, stdev=16.42, samples=19 00:31:08.888 lat (msec) : 20=0.27%, 50=99.73% 00:31:08.888 cpu : usr=98.92%, sys=0.81%, ctx=15, majf=0, minf=20 00:31:08.888 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename2: (groupid=0, jobs=1): err= 0: pid=4029066: Tue Nov 26 19:37:41 
2024 00:31:08.888 read: IOPS=656, BW=2624KiB/s (2687kB/s)(25.6MiB/10009msec) 00:31:08.888 slat (nsec): min=4115, max=78208, avg=15351.90, stdev=10218.00 00:31:08.888 clat (usec): min=9453, max=43458, avg=24266.40, stdev=3052.32 00:31:08.888 lat (usec): min=9459, max=43470, avg=24281.75, stdev=3052.78 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[14484], 5.00th=[18482], 10.00th=[23462], 20.00th=[23725], 00:31:08.888 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:31:08.888 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25297], 95.00th=[30540], 00:31:08.888 | 99.00th=[34341], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:31:08.888 | 99.99th=[43254] 00:31:08.888 bw ( KiB/s): min= 2560, max= 2736, per=4.12%, avg=2616.42, stdev=64.52, samples=19 00:31:08.888 iops : min= 640, max= 684, avg=654.11, stdev=16.13, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=6.08%, 50=93.68% 00:31:08.888 cpu : usr=98.51%, sys=1.09%, ctx=56, majf=0, minf=18 00:31:08.888 IO depths : 1=3.6%, 2=8.2%, 4=20.1%, 8=58.7%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename2: (groupid=0, jobs=1): err= 0: pid=4029067: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=679, BW=2720KiB/s (2785kB/s)(26.6MiB/10004msec) 00:31:08.888 slat (nsec): min=4414, max=75227, avg=15424.21, stdev=10344.60 00:31:08.888 clat (usec): min=7970, max=47302, avg=23400.02, stdev=3491.27 00:31:08.888 lat (usec): min=7976, max=47314, avg=23415.44, stdev=3493.28 00:31:08.888 clat percentiles (usec): 00:31:08.888 | 1.00th=[14091], 5.00th=[16319], 10.00th=[18220], 20.00th=[23462], 00:31:08.888 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 
00:31:08.888 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25560], 00:31:08.888 | 99.00th=[34341], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:31:08.888 | 99.99th=[47449] 00:31:08.888 bw ( KiB/s): min= 2432, max= 3168, per=4.24%, avg=2698.95, stdev=161.51, samples=19 00:31:08.888 iops : min= 608, max= 792, avg=674.74, stdev=40.38, samples=19 00:31:08.888 lat (msec) : 10=0.24%, 20=13.95%, 50=85.81% 00:31:08.888 cpu : usr=98.87%, sys=0.84%, ctx=44, majf=0, minf=18 00:31:08.888 IO depths : 1=3.6%, 2=7.8%, 4=18.2%, 8=60.7%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:08.888 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 complete : 0=0.0%, 4=92.3%, 8=2.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.888 issued rwts: total=6802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.888 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.888 filename2: (groupid=0, jobs=1): err= 0: pid=4029068: Tue Nov 26 19:37:41 2024 00:31:08.888 read: IOPS=655, BW=2623KiB/s (2686kB/s)(25.6MiB/10004msec) 00:31:08.888 slat (nsec): min=4090, max=54742, avg=16673.64, stdev=9287.85 00:31:08.888 clat (usec): min=19680, max=26373, avg=24252.40, stdev=511.70 00:31:08.888 lat (usec): min=19696, max=26380, avg=24269.07, stdev=511.17 00:31:08.888 clat percentiles (usec): 00:31:08.889 | 1.00th=[22938], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:31:08.889 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.889 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.889 | 99.00th=[25560], 99.50th=[25822], 99.90th=[26346], 99.95th=[26346], 00:31:08.889 | 99.99th=[26346] 00:31:08.889 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2620.32, stdev=65.33, samples=19 00:31:08.889 iops : min= 640, max= 672, avg=655.05, stdev=16.31, samples=19 00:31:08.889 lat (msec) : 20=0.23%, 50=99.77% 00:31:08.889 cpu : usr=98.88%, sys=0.85%, ctx=49, majf=0, minf=19 00:31:08.889 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 filename2: (groupid=0, jobs=1): err= 0: pid=4029069: Tue Nov 26 19:37:41 2024 00:31:08.889 read: IOPS=657, BW=2630KiB/s (2693kB/s)(25.7MiB/10012msec) 00:31:08.889 slat (nsec): min=4151, max=67073, avg=13606.52, stdev=8136.64 00:31:08.889 clat (usec): min=14882, max=39336, avg=24215.44, stdev=1211.81 00:31:08.889 lat (usec): min=14888, max=39343, avg=24229.05, stdev=1211.47 00:31:08.889 clat percentiles (usec): 00:31:08.889 | 1.00th=[18220], 5.00th=[23462], 10.00th=[23725], 20.00th=[23987], 00:31:08.889 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.889 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.889 | 99.00th=[26084], 99.50th=[26346], 99.90th=[33817], 99.95th=[39060], 00:31:08.889 | 99.99th=[39584] 00:31:08.889 bw ( KiB/s): min= 2560, max= 2736, per=4.14%, avg=2630.16, stdev=68.69, samples=19 00:31:08.889 iops : min= 640, max= 684, avg=657.53, stdev=17.19, samples=19 00:31:08.889 lat (msec) : 20=1.34%, 50=98.66% 00:31:08.889 cpu : usr=98.95%, sys=0.78%, ctx=12, majf=0, minf=23 00:31:08.889 IO depths : 1=6.1%, 2=12.2%, 4=24.5%, 8=50.7%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 filename2: (groupid=0, jobs=1): err= 0: pid=4029070: Tue Nov 26 19:37:41 2024 00:31:08.889 read: IOPS=650, BW=2603KiB/s 
(2665kB/s)(25.4MiB/10006msec) 00:31:08.889 slat (nsec): min=4203, max=65577, avg=14677.01, stdev=11701.43 00:31:08.889 clat (usec): min=7947, max=55020, avg=24522.41, stdev=3846.37 00:31:08.889 lat (usec): min=7953, max=55032, avg=24537.09, stdev=3846.02 00:31:08.889 clat percentiles (usec): 00:31:08.889 | 1.00th=[15926], 5.00th=[18744], 10.00th=[19792], 20.00th=[23200], 00:31:08.889 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24511], 00:31:08.889 | 70.00th=[24773], 80.00th=[25822], 90.00th=[28967], 95.00th=[31065], 00:31:08.889 | 99.00th=[36439], 99.50th=[39060], 99.90th=[54789], 99.95th=[54789], 00:31:08.889 | 99.99th=[54789] 00:31:08.889 bw ( KiB/s): min= 2176, max= 2720, per=4.08%, avg=2595.79, stdev=127.17, samples=19 00:31:08.889 iops : min= 544, max= 680, avg=648.95, stdev=31.79, samples=19 00:31:08.889 lat (msec) : 10=0.09%, 20=10.75%, 50=88.91%, 100=0.25% 00:31:08.889 cpu : usr=98.40%, sys=1.16%, ctx=134, majf=0, minf=45 00:31:08.889 IO depths : 1=0.4%, 2=0.9%, 4=4.1%, 8=79.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=89.3%, 8=8.5%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6511,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 filename2: (groupid=0, jobs=1): err= 0: pid=4029071: Tue Nov 26 19:37:41 2024 00:31:08.889 read: IOPS=658, BW=2635KiB/s (2699kB/s)(25.8MiB/10005msec) 00:31:08.889 slat (nsec): min=5830, max=73866, avg=14136.27, stdev=10077.25 00:31:08.889 clat (usec): min=7424, max=25935, avg=24169.90, stdev=1337.85 00:31:08.889 lat (usec): min=7432, max=25951, avg=24184.03, stdev=1337.12 00:31:08.889 clat percentiles (usec): 00:31:08.889 | 1.00th=[19792], 5.00th=[23725], 10.00th=[23725], 20.00th=[23987], 00:31:08.889 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24249], 60.00th=[24249], 00:31:08.889 | 70.00th=[24511], 
80.00th=[24773], 90.00th=[24773], 95.00th=[25035], 00:31:08.889 | 99.00th=[25560], 99.50th=[25822], 99.90th=[25822], 99.95th=[25822], 00:31:08.889 | 99.99th=[25822] 00:31:08.889 bw ( KiB/s): min= 2560, max= 2816, per=4.14%, avg=2634.11, stdev=77.69, samples=19 00:31:08.889 iops : min= 640, max= 704, avg=658.53, stdev=19.42, samples=19 00:31:08.889 lat (msec) : 10=0.24%, 20=0.85%, 50=98.91% 00:31:08.889 cpu : usr=99.12%, sys=0.63%, ctx=15, majf=0, minf=30 00:31:08.889 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 filename2: (groupid=0, jobs=1): err= 0: pid=4029072: Tue Nov 26 19:37:41 2024 00:31:08.889 read: IOPS=656, BW=2625KiB/s (2688kB/s)(25.6MiB/10006msec) 00:31:08.889 slat (nsec): min=4159, max=76090, avg=16363.91, stdev=11605.81 00:31:08.889 clat (usec): min=14612, max=37820, avg=24233.30, stdev=1096.16 00:31:08.889 lat (usec): min=14623, max=37829, avg=24249.66, stdev=1095.10 00:31:08.889 clat percentiles (usec): 00:31:08.889 | 1.00th=[19792], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725], 00:31:08.889 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.889 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25297], 00:31:08.889 | 99.00th=[25822], 99.50th=[28705], 99.90th=[33817], 99.95th=[38011], 00:31:08.889 | 99.99th=[38011] 00:31:08.889 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2623.16, stdev=64.11, samples=19 00:31:08.889 iops : min= 640, max= 672, avg=655.79, stdev=16.03, samples=19 00:31:08.889 lat (msec) : 20=1.01%, 50=98.99% 00:31:08.889 cpu : usr=98.99%, sys=0.74%, ctx=35, majf=0, minf=21 00:31:08.889 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 filename2: (groupid=0, jobs=1): err= 0: pid=4029073: Tue Nov 26 19:37:41 2024 00:31:08.889 read: IOPS=689, BW=2758KiB/s (2824kB/s)(27.0MiB/10021msec) 00:31:08.889 slat (nsec): min=4043, max=63767, avg=8285.84, stdev=4832.50 00:31:08.889 clat (usec): min=8295, max=35515, avg=23137.27, stdev=3292.40 00:31:08.889 lat (usec): min=8302, max=35523, avg=23145.56, stdev=3292.91 00:31:08.889 clat percentiles (usec): 00:31:08.889 | 1.00th=[ 9503], 5.00th=[15533], 10.00th=[18482], 20.00th=[23725], 00:31:08.889 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:31:08.889 | 70.00th=[24511], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:31:08.889 | 99.00th=[27395], 99.50th=[27919], 99.90th=[29754], 99.95th=[35390], 00:31:08.889 | 99.99th=[35390] 00:31:08.889 bw ( KiB/s): min= 2560, max= 3264, per=4.34%, avg=2757.85, stdev=236.58, samples=20 00:31:08.889 iops : min= 640, max= 816, avg=689.45, stdev=59.15, samples=20 00:31:08.889 lat (msec) : 10=1.58%, 20=10.93%, 50=87.50% 00:31:08.889 cpu : usr=99.15%, sys=0.55%, ctx=68, majf=0, minf=24 00:31:08.889 IO depths : 1=4.8%, 2=9.9%, 4=21.2%, 8=56.3%, 16=7.9%, 32=0.0%, >=64=0.0% 00:31:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 complete : 0=0.0%, 4=93.0%, 8=1.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.889 issued rwts: total=6910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.889 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:08.889 00:31:08.889 Run status group 0 (all jobs): 00:31:08.889 READ: bw=62.1MiB/s (65.1MB/s), 2603KiB/s-2915KiB/s (2665kB/s-2985kB/s), io=622MiB (652MB), 
run=10002-10022msec 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 bdev_null0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 [2024-11-26 19:37:41.242523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 bdev_null1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:08.889 19:37:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.889 { 00:31:08.889 "params": { 00:31:08.889 "name": "Nvme$subsystem", 00:31:08.889 "trtype": "$TEST_TRANSPORT", 00:31:08.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.889 "adrfam": "ipv4", 00:31:08.889 "trsvcid": "$NVMF_PORT", 00:31:08.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.889 "hdgst": ${hdgst:-false}, 00:31:08.889 "ddgst": ${ddgst:-false} 00:31:08.889 }, 00:31:08.889 "method": "bdev_nvme_attach_controller" 00:31:08.889 } 00:31:08.889 EOF 00:31:08.889 )") 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.889 { 00:31:08.889 "params": { 00:31:08.889 "name": "Nvme$subsystem", 00:31:08.889 "trtype": "$TEST_TRANSPORT", 00:31:08.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.889 "adrfam": "ipv4", 00:31:08.889 "trsvcid": "$NVMF_PORT", 00:31:08.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.889 "hdgst": ${hdgst:-false}, 00:31:08.889 "ddgst": ${ddgst:-false} 00:31:08.889 }, 00:31:08.889 "method": "bdev_nvme_attach_controller" 00:31:08.889 } 00:31:08.889 EOF 00:31:08.889 )") 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:31:08.889 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.890 "params": { 00:31:08.890 "name": "Nvme0", 00:31:08.890 "trtype": "tcp", 00:31:08.890 "traddr": "10.0.0.2", 00:31:08.890 "adrfam": "ipv4", 00:31:08.890 "trsvcid": "4420", 00:31:08.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.890 "hdgst": false, 00:31:08.890 "ddgst": false 00:31:08.890 }, 00:31:08.890 "method": "bdev_nvme_attach_controller" 00:31:08.890 },{ 00:31:08.890 "params": { 00:31:08.890 "name": "Nvme1", 00:31:08.890 "trtype": "tcp", 00:31:08.890 "traddr": "10.0.0.2", 00:31:08.890 "adrfam": "ipv4", 00:31:08.890 "trsvcid": "4420", 00:31:08.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:08.890 "hdgst": false, 00:31:08.890 "ddgst": false 00:31:08.890 }, 00:31:08.890 "method": "bdev_nvme_attach_controller" 00:31:08.890 }' 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:08.890 19:37:41 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:08.890 19:37:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.890 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:08.890 ... 00:31:08.890 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:08.890 ... 00:31:08.890 fio-3.35 00:31:08.890 Starting 4 threads 00:31:14.325 00:31:14.325 filename0: (groupid=0, jobs=1): err= 0: pid=4031691: Tue Nov 26 19:37:47 2024 00:31:14.325 read: IOPS=2904, BW=22.7MiB/s (23.8MB/s)(113MiB/5001msec) 00:31:14.325 slat (nsec): min=2922, max=46004, avg=6355.48, stdev=2205.47 00:31:14.325 clat (usec): min=1486, max=4690, avg=2736.95, stdev=266.69 00:31:14.325 lat (usec): min=1491, max=4695, avg=2743.30, stdev=266.65 00:31:14.325 clat percentiles (usec): 00:31:14.325 | 1.00th=[ 2089], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2638], 00:31:14.325 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:31:14.325 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3130], 00:31:14.325 | 99.00th=[ 3884], 99.50th=[ 4113], 99.90th=[ 4490], 99.95th=[ 4555], 00:31:14.325 | 99.99th=[ 4686] 00:31:14.325 bw ( KiB/s): min=23038, max=23376, per=24.89%, avg=23231.80, stdev=133.76, samples=10 00:31:14.325 iops : min= 2879, max= 2922, avg=2903.90, stdev=16.84, samples=10 00:31:14.325 lat (msec) : 2=0.57%, 4=98.79%, 10=0.64% 00:31:14.325 cpu : usr=96.16%, sys=2.88%, ctx=252, majf=0, minf=71 00:31:14.325 IO depths : 1=0.1%, 2=0.2%, 4=72.4%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 issued rwts: total=14525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.325 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:14.325 filename0: (groupid=0, jobs=1): err= 0: pid=4031692: Tue Nov 26 19:37:47 2024 00:31:14.325 read: IOPS=2883, BW=22.5MiB/s (23.6MB/s)(113MiB/5001msec) 00:31:14.325 slat (nsec): min=2915, max=38199, avg=6139.12, stdev=1922.72 00:31:14.325 clat (usec): min=1324, max=6815, avg=2758.36, stdev=255.55 00:31:14.325 lat (usec): min=1330, max=6824, avg=2764.50, stdev=255.58 00:31:14.325 clat percentiles (usec): 00:31:14.325 | 1.00th=[ 2212], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:31:14.325 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:31:14.325 | 70.00th=[ 2737], 80.00th=[ 2802], 90.00th=[ 2999], 95.00th=[ 3163], 00:31:14.325 | 99.00th=[ 3884], 99.50th=[ 4146], 99.90th=[ 4490], 99.95th=[ 4883], 00:31:14.325 | 99.99th=[ 4883] 00:31:14.325 bw ( KiB/s): min=22816, max=23216, per=24.70%, avg=23060.60, stdev=134.36, samples=10 00:31:14.325 iops : min= 2852, max= 2902, avg=2882.50, stdev=16.82, samples=10 00:31:14.325 lat (msec) : 2=0.35%, 4=98.94%, 10=0.71% 00:31:14.325 cpu : usr=96.96%, sys=2.80%, ctx=6, majf=0, minf=53 00:31:14.325 IO depths : 1=0.1%, 2=0.1%, 4=72.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 issued rwts: total=14418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.325 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:14.325 filename1: (groupid=0, jobs=1): err= 0: pid=4031694: Tue Nov 26 19:37:47 2024 00:31:14.325 read: IOPS=3003, BW=23.5MiB/s (24.6MB/s)(117MiB/5002msec) 00:31:14.325 slat (nsec): min=2909, max=29267, avg=6728.44, stdev=3223.12 00:31:14.325 clat (usec): min=1121, max=5425, avg=2646.50, stdev=310.13 00:31:14.325 lat (usec): min=1127, 
max=5441, avg=2653.23, stdev=310.04 00:31:14.325 clat percentiles (usec): 00:31:14.325 | 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2442], 00:31:14.325 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:31:14.325 | 70.00th=[ 2737], 80.00th=[ 2769], 90.00th=[ 2900], 95.00th=[ 3195], 00:31:14.325 | 99.00th=[ 3654], 99.50th=[ 3720], 99.90th=[ 4178], 99.95th=[ 4359], 00:31:14.325 | 99.99th=[ 5407] 00:31:14.325 bw ( KiB/s): min=23664, max=24400, per=25.74%, avg=24028.80, stdev=218.71, samples=10 00:31:14.325 iops : min= 2958, max= 3050, avg=3003.60, stdev=27.34, samples=10 00:31:14.325 lat (msec) : 2=2.44%, 4=97.40%, 10=0.16% 00:31:14.325 cpu : usr=97.38%, sys=2.38%, ctx=7, majf=0, minf=23 00:31:14.325 IO depths : 1=0.1%, 2=0.5%, 4=69.1%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 complete : 0=0.0%, 4=94.8%, 8=5.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 issued rwts: total=15023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.325 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:14.325 filename1: (groupid=0, jobs=1): err= 0: pid=4031696: Tue Nov 26 19:37:47 2024 00:31:14.325 read: IOPS=2879, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:31:14.325 slat (nsec): min=2890, max=29604, avg=6638.45, stdev=3107.25 00:31:14.325 clat (usec): min=1443, max=5313, avg=2760.54, stdev=281.70 00:31:14.325 lat (usec): min=1448, max=5324, avg=2767.18, stdev=281.65 00:31:14.325 clat percentiles (usec): 00:31:14.325 | 1.00th=[ 2147], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2638], 00:31:14.325 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:31:14.325 | 70.00th=[ 2769], 80.00th=[ 2835], 90.00th=[ 2999], 95.00th=[ 3228], 00:31:14.325 | 99.00th=[ 3982], 99.50th=[ 4228], 99.90th=[ 4752], 99.95th=[ 5014], 00:31:14.325 | 99.99th=[ 5276] 00:31:14.325 bw ( KiB/s): min=22800, max=23200, per=24.67%, avg=23028.80, 
stdev=127.12, samples=10 00:31:14.325 iops : min= 2850, max= 2900, avg=2878.60, stdev=15.89, samples=10 00:31:14.325 lat (msec) : 2=0.35%, 4=98.72%, 10=0.93% 00:31:14.325 cpu : usr=96.98%, sys=2.78%, ctx=6, majf=0, minf=34 00:31:14.325 IO depths : 1=0.1%, 2=0.4%, 4=72.5%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:14.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:14.325 issued rwts: total=14401,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:14.325 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:14.325 00:31:14.325 Run status group 0 (all jobs): 00:31:14.325 READ: bw=91.2MiB/s (95.6MB/s), 22.5MiB/s-23.5MiB/s (23.6MB/s-24.6MB/s), io=456MiB (478MB), run=5001-5002msec 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.325 00:31:14.325 real 0m23.950s 00:31:14.325 user 5m5.960s 00:31:14.325 sys 0m3.952s 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.325 19:37:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 ************************************ 00:31:14.325 END TEST fio_dif_rand_params 00:31:14.325 ************************************ 00:31:14.325 19:37:47 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:14.325 19:37:47 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:14.325 19:37:47 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.325 19:37:47 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:14.325 ************************************ 00:31:14.325 START TEST fio_dif_digest 00:31:14.325 ************************************ 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:14.325 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.326 bdev_null0 00:31:14.326 19:37:47 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:14.326 [2024-11-26 19:37:47.439938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:14.326 { 00:31:14.326 "params": { 00:31:14.326 "name": "Nvme$subsystem", 00:31:14.326 "trtype": "$TEST_TRANSPORT", 00:31:14.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.326 "adrfam": "ipv4", 00:31:14.326 "trsvcid": "$NVMF_PORT", 00:31:14.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.326 "hdgst": ${hdgst:-false}, 00:31:14.326 "ddgst": ${ddgst:-false} 00:31:14.326 }, 00:31:14.326 "method": "bdev_nvme_attach_controller" 00:31:14.326 } 00:31:14.326 EOF 00:31:14.326 )") 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- 
target/dif.sh@82 -- # gen_fio_conf 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.326 "params": { 00:31:14.326 "name": "Nvme0", 00:31:14.326 "trtype": "tcp", 00:31:14.326 "traddr": "10.0.0.2", 00:31:14.326 "adrfam": "ipv4", 00:31:14.326 "trsvcid": "4420", 00:31:14.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.326 "hdgst": true, 00:31:14.326 "ddgst": true 00:31:14.326 }, 00:31:14.326 "method": "bdev_nvme_attach_controller" 00:31:14.326 }' 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:14.326 19:37:47 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:14.326 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:14.326 ... 00:31:14.326 fio-3.35 00:31:14.326 Starting 3 threads 00:31:26.546 00:31:26.546 filename0: (groupid=0, jobs=1): err= 0: pid=4033088: Tue Nov 26 19:37:58 2024 00:31:26.546 read: IOPS=286, BW=35.9MiB/s (37.6MB/s)(360MiB/10047msec) 00:31:26.546 slat (nsec): min=4532, max=50365, avg=7011.53, stdev=1668.15 00:31:26.546 clat (usec): min=6478, max=49878, avg=10435.25, stdev=1421.44 00:31:26.546 lat (usec): min=6485, max=49884, avg=10442.26, stdev=1421.43 00:31:26.546 clat percentiles (usec): 00:31:26.546 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9634], 00:31:26.546 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:31:26.546 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:31:26.546 | 99.00th=[13042], 99.50th=[13304], 99.90th=[14353], 99.95th=[47973], 00:31:26.546 | 99.99th=[50070] 00:31:26.546 bw ( KiB/s): min=35072, max=38733, per=32.42%, avg=36855.05, stdev=919.80, samples=20 00:31:26.546 iops : min= 274, max= 302, avg=287.90, stdev= 7.12, samples=20 00:31:26.546 lat (msec) : 10=32.62%, 20=67.31%, 50=0.07% 00:31:26.546 cpu : usr=95.44%, sys=4.30%, ctx=42, majf=0, minf=129 00:31:26.546 
IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 issued rwts: total=2882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.546 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.546 filename0: (groupid=0, jobs=1): err= 0: pid=4033089: Tue Nov 26 19:37:58 2024 00:31:26.546 read: IOPS=296, BW=37.0MiB/s (38.8MB/s)(372MiB/10047msec) 00:31:26.546 slat (nsec): min=3256, max=72686, avg=6806.10, stdev=1598.73 00:31:26.546 clat (usec): min=6707, max=50467, avg=10105.16, stdev=1410.09 00:31:26.546 lat (usec): min=6714, max=50474, avg=10111.96, stdev=1410.06 00:31:26.546 clat percentiles (usec): 00:31:26.546 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:31:26.546 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:31:26.546 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[11600], 00:31:26.546 | 99.00th=[12256], 99.50th=[12780], 99.90th=[16057], 99.95th=[48497], 00:31:26.546 | 99.99th=[50594] 00:31:26.546 bw ( KiB/s): min=36608, max=41984, per=33.49%, avg=38067.20, stdev=1342.09, samples=20 00:31:26.546 iops : min= 286, max= 328, avg=297.40, stdev=10.49, samples=20 00:31:26.546 lat (msec) : 10=45.80%, 20=54.13%, 50=0.03%, 100=0.03% 00:31:26.546 cpu : usr=95.86%, sys=3.89%, ctx=18, majf=0, minf=203 00:31:26.546 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.546 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.546 filename0: (groupid=0, jobs=1): err= 0: pid=4033090: Tue Nov 26 19:37:58 2024 00:31:26.546 read: 
IOPS=305, BW=38.1MiB/s (40.0MB/s)(383MiB/10043msec) 00:31:26.546 slat (nsec): min=4454, max=28589, avg=7514.76, stdev=1507.42 00:31:26.546 clat (usec): min=6455, max=54487, avg=9810.52, stdev=1968.75 00:31:26.546 lat (usec): min=6462, max=54497, avg=9818.03, stdev=1968.81 00:31:26.546 clat percentiles (usec): 00:31:26.546 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8848], 00:31:26.546 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:31:26.546 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731], 00:31:26.546 | 99.00th=[12649], 99.50th=[13173], 99.90th=[46400], 99.95th=[54264], 00:31:26.546 | 99.99th=[54264] 00:31:26.546 bw ( KiB/s): min=34491, max=41472, per=34.48%, avg=39190.15, stdev=1913.59, samples=20 00:31:26.546 iops : min= 269, max= 324, avg=306.15, stdev=15.01, samples=20 00:31:26.546 lat (msec) : 10=62.30%, 20=37.53%, 50=0.07%, 100=0.10% 00:31:26.546 cpu : usr=95.63%, sys=4.10%, ctx=26, majf=0, minf=169 00:31:26.546 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.546 issued rwts: total=3064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.546 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.546 00:31:26.546 Run status group 0 (all jobs): 00:31:26.546 READ: bw=111MiB/s (116MB/s), 35.9MiB/s-38.1MiB/s (37.6MB/s-40.0MB/s), io=1115MiB (1169MB), run=10043-10047msec 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local 
sub_id=0 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.546 00:31:26.546 real 0m10.962s 00:31:26.546 user 0m39.399s 00:31:26.546 sys 0m1.499s 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:26.546 19:37:58 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:26.546 ************************************ 00:31:26.546 END TEST fio_dif_digest 00:31:26.546 ************************************ 00:31:26.546 19:37:58 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:26.546 19:37:58 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:26.546 rmmod nvme_tcp 00:31:26.546 rmmod nvme_fabrics 00:31:26.546 rmmod nvme_keyring 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 4021663 ']' 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 4021663 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 4021663 ']' 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 4021663 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4021663 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4021663' 00:31:26.546 killing process with pid 4021663 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@973 -- # kill 4021663 00:31:26.546 19:37:58 nvmf_dif -- common/autotest_common.sh@978 -- # wait 4021663 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:31:26.546 19:37:58 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:26.808 Waiting for block devices as requested 00:31:26.808 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:26.808 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:27.067 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:27.067 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:27.067 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:27.067 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:27.324 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:27.324 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:27.324 
0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:27.582 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:27.582 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:27.582 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:27.582 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:27.582 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:27.841 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:27.841 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:27.841 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:28.100 19:38:01 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.100 19:38:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:28.100 19:38:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.634 19:38:03 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:30.634 00:31:30.634 real 1m12.153s 00:31:30.634 user 7m39.701s 00:31:30.634 sys 0m17.474s 00:31:30.634 19:38:03 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:30.634 19:38:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.634 ************************************ 00:31:30.634 END TEST nvmf_dif 00:31:30.634 ************************************ 00:31:30.634 19:38:03 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:30.634 19:38:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:30.634 19:38:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:30.634 19:38:03 -- common/autotest_common.sh@10 -- # set +x 00:31:30.634 ************************************ 00:31:30.634 START TEST nvmf_abort_qd_sizes 00:31:30.634 ************************************ 00:31:30.634 19:38:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:30.634 * Looking for test storage... 00:31:30.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:30.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.634 --rc genhtml_branch_coverage=1 00:31:30.634 --rc genhtml_function_coverage=1 00:31:30.634 --rc 
genhtml_legend=1 00:31:30.634 --rc geninfo_all_blocks=1 00:31:30.634 --rc geninfo_unexecuted_blocks=1 00:31:30.634 00:31:30.634 ' 00:31:30.634 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:30.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.634 --rc genhtml_branch_coverage=1 00:31:30.634 --rc genhtml_function_coverage=1 00:31:30.634 --rc genhtml_legend=1 00:31:30.634 --rc geninfo_all_blocks=1 00:31:30.634 --rc geninfo_unexecuted_blocks=1 00:31:30.634 00:31:30.634 ' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.635 --rc genhtml_branch_coverage=1 00:31:30.635 --rc genhtml_function_coverage=1 00:31:30.635 --rc genhtml_legend=1 00:31:30.635 --rc geninfo_all_blocks=1 00:31:30.635 --rc geninfo_unexecuted_blocks=1 00:31:30.635 00:31:30.635 ' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:30.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.635 --rc genhtml_branch_coverage=1 00:31:30.635 --rc genhtml_function_coverage=1 00:31:30.635 --rc genhtml_legend=1 00:31:30.635 --rc geninfo_all_blocks=1 00:31:30.635 --rc geninfo_unexecuted_blocks=1 00:31:30.635 00:31:30.635 ' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:30.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:31:30.635 19:38:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:35.908 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:35.908 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:35.908 Found net devices under 0000:31:00.0: cvl_0_0 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:35.908 Found net devices under 0000:31:00.1: cvl_0_1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:35.908 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:35.908 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:31:35.908 00:31:35.908 --- 10.0.0.2 ping statistics --- 00:31:35.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.908 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:35.908 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:35.908 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:31:35.908 00:31:35.908 --- 10.0.0.1 ping statistics --- 00:31:35.908 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:35.908 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:31:35.908 19:38:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:37.813 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:37.813 0000:00:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:31:38.072 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:38.072 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.330 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=4043060 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 4043060 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 4043060 ']' 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
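The `nvmf_tcp_init` sequence above builds the test network: create the `cvl_0_0_ns_spdk` namespace, move the target port into it, assign 10.0.0.1/24 (initiator) and 10.0.0.2/24 (target), bring links up, open TCP port 4420 in iptables, and cross-ping. A hedged sketch of the same plumbing, substituting a veth pair for the physical `cvl_0_*` ports so it can run on any box; all names (`spdk_tgt_ns`, `veth_tgt`, `veth_ini`) are illustrative, and the function requires root:

```shell
# Hedged sketch of the namespace plumbing done by nvmf_tcp_init above.
# The log moves a physical NIC port into the namespace; this substitutes
# a veth pair. Names are illustrative; needs root to actually run.
setup_test_netns() {
    ip netns add spdk_tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns spdk_tgt_ns            # target side lives in the ns
    ip addr add 10.0.0.1/24 dev veth_ini              # initiator IP, as in the log
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec spdk_tgt_ns ip link set veth_tgt up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Mirror the log's iptables rule opening the NVMe/TCP listener port
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # cross-namespace reachability
}
```

Running the target inside the namespace (the `ip netns exec cvl_0_0_ns_spdk` prefix folded into `NVMF_APP` above) is what lets initiator and target share one host while still exercising a real TCP path.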
00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:38.331 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:38.331 [2024-11-26 19:38:12.123166] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:31:38.331 [2024-11-26 19:38:12.123236] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:38.591 [2024-11-26 19:38:12.214838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:38.591 [2024-11-26 19:38:12.268817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:38.591 [2024-11-26 19:38:12.268867] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:38.591 [2024-11-26 19:38:12.268877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:38.591 [2024-11-26 19:38:12.268884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:38.591 [2024-11-26 19:38:12.268890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
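The four "Reactor started on core N" notices above follow directly from the `-m 0xf` coremask passed to `nvmf_tgt`: each set bit selects one core. A small sketch of that decoding, with a hypothetical helper name (`mask_to_cores` is not part of SPDK):

```shell
# Hedged sketch: decode an SPDK -m coremask into the core list the
# reactors start on. mask_to_cores is an illustrative name.
mask_to_cores() {
    local mask=$(( $1 )) core=0    # $(( 0xf )) -> 15
    local -a cores=()
    while (( mask )); do
        (( mask & 1 )) && cores+=("$core")   # bit set -> core selected
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${cores[@]}"
}

mask_to_cores 0xf    # prints "0 1 2 3"
```

Note the reactors announce themselves in scheduler order (1, 2, 0, 3 in the trace), not bit order; the mask only determines membership.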
00:31:38.591 [2024-11-26 19:38:12.270968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.591 [2024-11-26 19:38:12.271145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:38.591 [2024-11-26 19:38:12.271233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:38.591 [2024-11-26 19:38:12.271232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
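The `nvme_in_userspace` scan above finds NVMe functions by PCI class code 0x010802 (mass storage / NVM / NVMe programming interface) and keeps those still bound to the kernel `nvme` driver, yielding `0000:65:00.0`. A hedged sysfs-walk sketch; the real helper also consults a prebuilt `pci_bus_cache`, and `list_nvme_bdfs` is an illustrative name:

```shell
# Hedged sketch of the nvme_in_userspace scan: walk sysfs and keep PCI
# functions whose class code is 0x010802 (NVMe). Illustrative name only.
list_nvme_bdfs() {
    local dev class
    for dev in /sys/bus/pci/devices/*; do
        [[ -e $dev/class ]] || continue
        read -r class < "$dev/class"
        # 0x010802: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVMe)
        [[ $class == 0x010802 ]] && printf '%s\n' "${dev##*/}"
    done
}

list_nvme_bdfs    # output depends on the host's hardware
```

The resulting BDF is then handed to `bdev_nvme_attach_controller -t pcie -a 0000:65:00.0`, as the next RPC in the trace shows.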
00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:39.162 19:38:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:39.162 ************************************ 00:31:39.162 START TEST spdk_target_abort 00:31:39.162 ************************************ 00:31:39.162 19:38:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:31:39.162 19:38:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:39.162 19:38:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:39.162 19:38:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.162 19:38:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.730 spdk_targetn1 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.730 [2024-11-26 19:38:13.295247] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.730 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:39.731 [2024-11-26 19:38:13.335526] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:39.731 19:38:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:39.731 [2024-11-26 19:38:13.473584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:512 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.473619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0043 p:1 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.489611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1024 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.489634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0083 p:1 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.491473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1136 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:31:39.731 [2024-11-26 
19:38:13.491492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0091 p:1 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.497659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1288 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.497679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00a3 p:1 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1848 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.514406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e8 p:1 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.545639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2848 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.545661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.561618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3376 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.561639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:31:39.731 [2024-11-26 19:38:13.569643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3608 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:31:39.731 [2024-11-26 19:38:13.569663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c7 p:0 m:0 dnr:0 00:31:43.023 Initializing NVMe Controllers 00:31:43.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 
00:31:43.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:43.023 Initialization complete. Launching workers. 00:31:43.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11785, failed: 8 00:31:43.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2798, failed to submit 8995 00:31:43.023 success 693, unsuccessful 2105, failed 0 00:31:43.023 19:38:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:43.023 19:38:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:43.023 [2024-11-26 19:38:16.602597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200004e56000 PRP2 0x0 00:31:43.023 [2024-11-26 19:38:16.602627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0023 p:1 m:0 dnr:0 00:31:43.023 [2024-11-26 19:38:16.610977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:312 len:8 PRP1 0x200004e54000 PRP2 0x0 00:31:43.023 [2024-11-26 19:38:16.610996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0035 p:1 m:0 dnr:0 00:31:43.023 [2024-11-26 19:38:16.634950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:920 len:8 PRP1 0x200004e58000 PRP2 0x0 00:31:43.023 [2024-11-26 19:38:16.634970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0077 p:1 m:0 dnr:0 00:31:43.023 [2024-11-26 19:38:16.729865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 
lba:3056 len:8 PRP1 0x200004e46000 PRP2 0x0 00:31:43.023 [2024-11-26 19:38:16.729885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0081 p:0 m:0 dnr:0 00:31:46.310 Initializing NVMe Controllers 00:31:46.310 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:46.310 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:46.310 Initialization complete. Launching workers. 00:31:46.310 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8490, failed: 4 00:31:46.310 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1207, failed to submit 7287 00:31:46.310 success 333, unsuccessful 874, failed 0 00:31:46.310 19:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:46.310 19:38:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:46.569 [2024-11-26 19:38:20.406751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:163 nsid:1 lba:46408 len:8 PRP1 0x200004ae8000 PRP2 0x0 00:31:46.569 [2024-11-26 19:38:20.406783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:163 cdw0:0 sqhd:00bc p:1 m:0 dnr:0 00:31:48.478 [2024-11-26 19:38:21.884700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:160 nsid:1 lba:218624 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:31:48.478 [2024-11-26 19:38:21.884723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:160 cdw0:0 sqhd:00df p:1 m:0 dnr:0 00:31:48.738 [2024-11-26 19:38:22.351418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:272944 len:8 PRP1 0x200004ade000 PRP2 0x0 00:31:48.738 [2024-11-26 19:38:22.351439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:49.306 Initializing NVMe Controllers 00:31:49.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:49.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:49.306 Initialization complete. Launching workers. 00:31:49.306 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43870, failed: 3 00:31:49.306 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2644, failed to submit 41229 00:31:49.306 success 591, unsuccessful 2053, failed 0 00:31:49.306 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:49.306 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.306 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:49.306 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.306 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:49.307 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.307 19:38:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 4043060 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 
4043060 ']' 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 4043060 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4043060 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4043060' 00:31:51.216 killing process with pid 4043060 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 4043060 00:31:51.216 19:38:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 4043060 00:31:51.216 00:31:51.216 real 0m12.037s 00:31:51.216 user 0m48.824s 00:31:51.216 sys 0m1.966s 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:51.216 ************************************ 00:31:51.216 END TEST spdk_target_abort 00:31:51.216 ************************************ 00:31:51.216 19:38:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:51.216 19:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:51.216 19:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:51.216 19:38:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:51.216 
************************************ 00:31:51.216 START TEST kernel_target_abort 00:31:51.216 ************************************ 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- 
# kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:31:51.216 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:31:51.476 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:51.476 19:38:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:54.015 Waiting for block devices as requested 00:31:54.015 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.015 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:54.274 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:54.274 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:54.274 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:54.274 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:54.534 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:54.534 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:54.534 0000:00:01.0 (8086 0b00): vfio-pci -> 
ioatdma 00:31:54.534 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:31:54.795 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:55.054 No valid GPT data, bailing 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:31:55.054 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb --hostid=801c19ac-fce9-ec11-9bc7-a4bf019282bb -a 10.0.0.1 -t tcp -s 4420 00:31:55.055 00:31:55.055 Discovery Log Number of Records 2, Generation counter 2 00:31:55.055 =====Discovery Log Entry 0====== 00:31:55.055 trtype: tcp 00:31:55.055 adrfam: ipv4 00:31:55.055 subtype: current discovery subsystem 00:31:55.055 treq: not specified, sq flow control disable supported 00:31:55.055 portid: 1 00:31:55.055 trsvcid: 4420 00:31:55.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:55.055 traddr: 10.0.0.1 00:31:55.055 eflags: none 00:31:55.055 
sectype: none 00:31:55.055 =====Discovery Log Entry 1====== 00:31:55.055 trtype: tcp 00:31:55.055 adrfam: ipv4 00:31:55.055 subtype: nvme subsystem 00:31:55.055 treq: not specified, sq flow control disable supported 00:31:55.055 portid: 1 00:31:55.055 trsvcid: 4420 00:31:55.055 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:55.055 traddr: 10.0.0.1 00:31:55.055 eflags: none 00:31:55.055 sectype: none 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # 
target='trtype:tcp adrfam:IPv4' 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.055 19:38:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.346 Initializing NVMe Controllers 00:31:58.346 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.346 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.346 Initialization complete. Launching workers. 
00:31:58.346 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95403, failed: 0 00:31:58.346 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95403, failed to submit 0 00:31:58.346 success 0, unsuccessful 95403, failed 0 00:31:58.346 19:38:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.346 19:38:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:01.639 Initializing NVMe Controllers 00:32:01.639 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:01.639 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:01.639 Initialization complete. Launching workers. 00:32:01.639 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 154440, failed: 0 00:32:01.639 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38830, failed to submit 115610 00:32:01.639 success 0, unsuccessful 38830, failed 0 00:32:01.639 19:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.639 19:38:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:04.178 Initializing NVMe Controllers 00:32:04.178 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:04.178 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:04.178 Initialization complete. Launching workers. 
00:32:04.178 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145951, failed: 0 00:32:04.178 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36534, failed to submit 109417 00:32:04.178 success 0, unsuccessful 36534, failed 0 00:32:04.178 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:04.178 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:04.178 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:32:04.438 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:04.438 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:04.439 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:04.439 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:04.439 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:32:04.439 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:32:04.439 19:38:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:06.978 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:06.978 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:08.886 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:09.146 00:32:09.146 real 0m17.710s 00:32:09.146 user 0m8.809s 00:32:09.146 sys 0m4.537s 00:32:09.146 19:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.146 19:38:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:09.146 ************************************ 00:32:09.146 END TEST kernel_target_abort 00:32:09.146 ************************************ 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:09.146 rmmod nvme_tcp 00:32:09.146 rmmod nvme_fabrics 00:32:09.146 rmmod nvme_keyring 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 4043060 ']' 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 4043060 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 4043060 ']' 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 4043060 00:32:09.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (4043060) - No such process 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 4043060 is not found' 00:32:09.146 Process with pid 4043060 is not found 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:32:09.146 19:38:42 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:11.679 Waiting for block devices as requested 00:32:11.679 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:11.679 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:11.938 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:11.938 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:11.938 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:11.938 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:12.198 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:12.198 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:12.198 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:12.198 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:12.457 19:38:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:14.991 19:38:48 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:14.991 00:32:14.991 real 0m44.334s 00:32:14.991 user 1m1.238s 00:32:14.991 sys 0m14.069s 00:32:14.991 19:38:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:14.991 19:38:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:14.991 ************************************ 00:32:14.991 END TEST nvmf_abort_qd_sizes 00:32:14.991 ************************************ 00:32:14.991 19:38:48 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:14.991 19:38:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:14.991 19:38:48 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:32:14.991 19:38:48 -- common/autotest_common.sh@10 -- # set +x 00:32:14.991 ************************************ 00:32:14.991 START TEST keyring_file 00:32:14.991 ************************************ 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:14.991 * Looking for test storage... 00:32:14.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@345 -- # : 1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:14.991 19:38:48 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@353 -- # local d=1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@355 -- # echo 1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@353 -- # local d=2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@355 -- # echo 2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@368 -- # return 0 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:14.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.991 --rc genhtml_branch_coverage=1 00:32:14.991 --rc genhtml_function_coverage=1 00:32:14.991 --rc genhtml_legend=1 00:32:14.991 --rc geninfo_all_blocks=1 00:32:14.991 --rc geninfo_unexecuted_blocks=1 00:32:14.991 00:32:14.991 ' 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:14.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.991 --rc genhtml_branch_coverage=1 00:32:14.991 --rc genhtml_function_coverage=1 00:32:14.991 --rc genhtml_legend=1 00:32:14.991 --rc geninfo_all_blocks=1 00:32:14.991 --rc 
geninfo_unexecuted_blocks=1 00:32:14.991 00:32:14.991 ' 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:14.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.991 --rc genhtml_branch_coverage=1 00:32:14.991 --rc genhtml_function_coverage=1 00:32:14.991 --rc genhtml_legend=1 00:32:14.991 --rc geninfo_all_blocks=1 00:32:14.991 --rc geninfo_unexecuted_blocks=1 00:32:14.991 00:32:14.991 ' 00:32:14.991 19:38:48 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:14.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:14.991 --rc genhtml_branch_coverage=1 00:32:14.991 --rc genhtml_function_coverage=1 00:32:14.991 --rc genhtml_legend=1 00:32:14.991 --rc geninfo_all_blocks=1 00:32:14.991 --rc geninfo_unexecuted_blocks=1 00:32:14.991 00:32:14.991 ' 00:32:14.991 19:38:48 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:14.991 19:38:48 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:14.991 19:38:48 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:14.991 19:38:48 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:14.991 19:38:48 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.991 19:38:48 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.991 19:38:48 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.991 19:38:48 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:14.991 19:38:48 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:14.991 19:38:48 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:32:14.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3BrkT4pwr4 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3BrkT4pwr4 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3BrkT4pwr4 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.3BrkT4pwr4 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Nnld9lbn0t 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:14.992 19:38:48 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Nnld9lbn0t 00:32:14.992 19:38:48 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Nnld9lbn0t 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Nnld9lbn0t 
00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@30 -- # tgtpid=4053588 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@32 -- # waitforlisten 4053588 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4053588 ']' 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:14.992 19:38:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:14.992 19:38:48 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:14.992 [2024-11-26 19:38:48.632346] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:32:14.992 [2024-11-26 19:38:48.632419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053588 ] 00:32:14.992 [2024-11-26 19:38:48.705568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.992 [2024-11-26 19:38:48.744610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.559 19:38:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:15.559 19:38:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:15.559 19:38:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:15.559 19:38:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.559 19:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:15.559 [2024-11-26 19:38:49.412180] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:15.818 null0 00:32:15.818 [2024-11-26 19:38:49.444235] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:15.818 [2024-11-26 19:38:49.444523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.818 19:38:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.818 19:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:15.818 [2024-11-26 19:38:49.472295] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:15.818 request: 00:32:15.818 { 00:32:15.818 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:15.818 "secure_channel": false, 00:32:15.818 "listen_address": { 00:32:15.818 "trtype": "tcp", 00:32:15.818 "traddr": "127.0.0.1", 00:32:15.818 "trsvcid": "4420" 00:32:15.818 }, 00:32:15.819 "method": "nvmf_subsystem_add_listener", 00:32:15.819 "req_id": 1 00:32:15.819 } 00:32:15.819 Got JSON-RPC error response 00:32:15.819 response: 00:32:15.819 { 00:32:15.819 "code": -32602, 00:32:15.819 "message": "Invalid parameters" 00:32:15.819 } 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:15.819 19:38:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=4053925 00:32:15.819 19:38:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 4053925 /var/tmp/bperf.sock 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4053925 ']' 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:15.819 19:38:49 
keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:15.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.819 19:38:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:15.819 19:38:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:15.819 [2024-11-26 19:38:49.511028] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 00:32:15.819 [2024-11-26 19:38:49.511077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4053925 ] 00:32:15.819 [2024-11-26 19:38:49.588079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:15.819 [2024-11-26 19:38:49.623977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.807 19:38:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.807 19:38:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:16.807 19:38:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:16.807 19:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:16.807 19:38:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Nnld9lbn0t 00:32:16.807 19:38:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Nnld9lbn0t 00:32:16.807 19:38:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:32:16.807 19:38:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:16.807 19:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:16.807 19:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:16.807 19:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.122 19:38:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.3BrkT4pwr4 == \/\t\m\p\/\t\m\p\.\3\B\r\k\T\4\p\w\r\4 ]] 00:32:17.122 19:38:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:32:17.122 19:38:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.122 19:38:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Nnld9lbn0t == \/\t\m\p\/\t\m\p\.\N\n\l\d\9\l\b\n\0\t ]] 00:32:17.122 19:38:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.122 19:38:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:32:17.424 19:38:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:17.424 19:38:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:17.424 19:38:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:32:17.424 19:38:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.424 19:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:17.683 [2024-11-26 19:38:51.399659] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:17.683 nvme0n1 00:32:17.683 19:38:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:32:17.683 19:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:17.683 19:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.683 19:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.683 19:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:17.683 19:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:32:17.942 19:38:51 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:32:17.942 19:38:51 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:32:17.942 19:38:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:17.942 19:38:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:17.942 19:38:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:17.942 19:38:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:17.942 19:38:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:18.202 19:38:51 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:32:18.202 19:38:51 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:18.202 Running I/O for 1 seconds... 00:32:19.140 21433.00 IOPS, 83.72 MiB/s 00:32:19.140 Latency(us) 00:32:19.140 [2024-11-26T18:38:53.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.140 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:19.140 nvme0n1 : 1.00 21480.61 83.91 0.00 0.00 5948.83 2348.37 9065.81 00:32:19.140 [2024-11-26T18:38:53.005Z] =================================================================================================================== 00:32:19.140 [2024-11-26T18:38:53.005Z] Total : 21480.61 83.91 0.00 0.00 5948.83 2348.37 9065.81 00:32:19.140 { 00:32:19.140 "results": [ 00:32:19.140 { 00:32:19.140 "job": "nvme0n1", 00:32:19.140 "core_mask": "0x2", 00:32:19.140 "workload": "randrw", 00:32:19.140 "percentage": 50, 00:32:19.140 "status": "finished", 00:32:19.140 "queue_depth": 128, 00:32:19.140 "io_size": 4096, 00:32:19.140 "runtime": 1.003789, 00:32:19.140 "iops": 21480.60996882811, 00:32:19.140 "mibps": 83.9086326907348, 
00:32:19.140 "io_failed": 0, 00:32:19.140 "io_timeout": 0, 00:32:19.140 "avg_latency_us": 5948.833382184708, 00:32:19.140 "min_latency_us": 2348.3733333333334, 00:32:19.140 "max_latency_us": 9065.813333333334 00:32:19.140 } 00:32:19.140 ], 00:32:19.140 "core_count": 1 00:32:19.140 } 00:32:19.140 19:38:52 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:19.140 19:38:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:19.399 19:38:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.400 19:38:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:19.400 19:38:53 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:19.400 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.659 19:38:53 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:32:19.659 19:38:53 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:19.659 19:38:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:19.659 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:19.919 [2024-11-26 19:38:53.547749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:19.919 [2024-11-26 19:38:53.548552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18539f0 (107): Transport endpoint is not connected 00:32:19.919 [2024-11-26 19:38:53.549548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18539f0 (9): Bad file descriptor 00:32:19.919 [2024-11-26 19:38:53.550550] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:19.919 [2024-11-26 19:38:53.550559] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:19.919 [2024-11-26 19:38:53.550565] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:19.919 [2024-11-26 19:38:53.550571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:32:19.919 request: 00:32:19.919 { 00:32:19.919 "name": "nvme0", 00:32:19.919 "trtype": "tcp", 00:32:19.919 "traddr": "127.0.0.1", 00:32:19.919 "adrfam": "ipv4", 00:32:19.919 "trsvcid": "4420", 00:32:19.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:19.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:19.919 "prchk_reftag": false, 00:32:19.919 "prchk_guard": false, 00:32:19.919 "hdgst": false, 00:32:19.919 "ddgst": false, 00:32:19.919 "psk": "key1", 00:32:19.919 "allow_unrecognized_csi": false, 00:32:19.919 "method": "bdev_nvme_attach_controller", 00:32:19.919 "req_id": 1 00:32:19.919 } 00:32:19.919 Got JSON-RPC error response 00:32:19.919 response: 00:32:19.919 { 00:32:19.919 "code": -5, 00:32:19.919 "message": "Input/output error" 00:32:19.919 } 00:32:19.919 19:38:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:19.919 19:38:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:19.919 19:38:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:19.919 19:38:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:19.919 19:38:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:32:19.919 19:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:19.919 19:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.919 19:38:53 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:32:19.919 19:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:19.919 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.919 19:38:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:19.920 19:38:53 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:32:19.920 19:38:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:19.920 19:38:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:19.920 19:38:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:19.920 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:19.920 19:38:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:20.179 19:38:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:32:20.179 19:38:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:32:20.180 19:38:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:20.440 19:38:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:32:20.440 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:20.440 19:38:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:32:20.440 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:20.440 19:38:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:32:20.699 19:38:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:32:20.699 19:38:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.699 [2024-11-26 19:38:54.515236] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.3BrkT4pwr4': 0100660 00:32:20.699 [2024-11-26 19:38:54.515255] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:20.699 request: 00:32:20.699 { 00:32:20.699 "name": "key0", 00:32:20.699 "path": "/tmp/tmp.3BrkT4pwr4", 00:32:20.699 "method": "keyring_file_add_key", 00:32:20.699 "req_id": 1 00:32:20.699 } 00:32:20.699 Got JSON-RPC error response 00:32:20.699 response: 00:32:20.699 { 00:32:20.699 "code": -1, 00:32:20.699 "message": "Operation not permitted" 00:32:20.699 } 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:20.699 19:38:54 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:20.699 19:38:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:20.699 19:38:54 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.699 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3BrkT4pwr4 00:32:20.959 19:38:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.3BrkT4pwr4 00:32:20.959 19:38:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:32:20.959 19:38:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:20.959 19:38:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:20.959 19:38:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:20.959 19:38:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:20.959 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.218 19:38:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:32:21.218 19:38:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:21.218 19:38:54 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:21.218 19:38:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.218 19:38:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.218 [2024-11-26 19:38:55.004495] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.3BrkT4pwr4': No such file or directory 00:32:21.218 [2024-11-26 19:38:55.004511] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:21.218 [2024-11-26 19:38:55.004525] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:21.218 [2024-11-26 19:38:55.004531] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:32:21.218 [2024-11-26 19:38:55.004537] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:21.218 [2024-11-26 19:38:55.004542] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:21.218 request: 00:32:21.218 { 00:32:21.218 "name": "nvme0", 00:32:21.218 "trtype": "tcp", 00:32:21.218 "traddr": "127.0.0.1", 00:32:21.218 "adrfam": "ipv4", 00:32:21.218 "trsvcid": "4420", 00:32:21.218 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:21.218 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:32:21.218 "prchk_reftag": false, 00:32:21.218 "prchk_guard": false, 00:32:21.218 "hdgst": false, 00:32:21.218 "ddgst": false, 00:32:21.218 "psk": "key0", 00:32:21.218 "allow_unrecognized_csi": false, 00:32:21.218 "method": "bdev_nvme_attach_controller", 00:32:21.218 "req_id": 1 00:32:21.218 } 00:32:21.218 Got JSON-RPC error response 00:32:21.218 response: 00:32:21.218 { 00:32:21.218 "code": -19, 00:32:21.218 "message": "No such device" 00:32:21.218 } 00:32:21.218 19:38:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:32:21.218 19:38:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:21.218 19:38:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:21.218 19:38:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:21.218 19:38:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:32:21.218 19:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:21.478 19:38:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.v25m3jJ3BE 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:21.478 19:38:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:21.478 19:38:55 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:32:21.478 19:38:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:21.478 19:38:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:32:21.478 19:38:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:32:21.478 19:38:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.v25m3jJ3BE 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.v25m3jJ3BE 00:32:21.478 19:38:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.v25m3jJ3BE 00:32:21.478 19:38:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v25m3jJ3BE 00:32:21.478 19:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v25m3jJ3BE 00:32:21.738 19:38:55 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:21.738 nvme0n1 00:32:21.738 19:38:55 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:21.738 19:38:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:21.998 19:38:55 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:32:21.998 19:38:55 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:32:21.998 19:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:22.256 19:38:55 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:32:22.256 19:38:55 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:32:22.256 19:38:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.256 19:38:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.256 19:38:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.256 19:38:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:32:22.256 19:38:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:32:22.256 19:38:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:22.256 19:38:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:22.256 19:38:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:22.256 19:38:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:22.256 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.515 19:38:56 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:32:22.515 19:38:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:22.515 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:32:22.774 19:38:56 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:32:22.774 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:22.774 19:38:56 keyring_file -- keyring/file.sh@105 -- # jq length 00:32:22.774 19:38:56 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:32:22.774 19:38:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.v25m3jJ3BE 00:32:22.775 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.v25m3jJ3BE 00:32:23.034 19:38:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Nnld9lbn0t 00:32:23.034 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Nnld9lbn0t 00:32:23.034 19:38:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.034 19:38:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:23.292 nvme0n1 00:32:23.292 19:38:57 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:32:23.292 19:38:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:23.552 19:38:57 keyring_file -- keyring/file.sh@113 -- # config='{ 00:32:23.552 "subsystems": [ 00:32:23.552 { 00:32:23.552 "subsystem": 
"keyring", 00:32:23.552 "config": [ 00:32:23.552 { 00:32:23.552 "method": "keyring_file_add_key", 00:32:23.552 "params": { 00:32:23.552 "name": "key0", 00:32:23.552 "path": "/tmp/tmp.v25m3jJ3BE" 00:32:23.552 } 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "method": "keyring_file_add_key", 00:32:23.552 "params": { 00:32:23.552 "name": "key1", 00:32:23.552 "path": "/tmp/tmp.Nnld9lbn0t" 00:32:23.552 } 00:32:23.552 } 00:32:23.552 ] 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "subsystem": "iobuf", 00:32:23.552 "config": [ 00:32:23.552 { 00:32:23.552 "method": "iobuf_set_options", 00:32:23.552 "params": { 00:32:23.552 "small_pool_count": 8192, 00:32:23.552 "large_pool_count": 1024, 00:32:23.552 "small_bufsize": 8192, 00:32:23.552 "large_bufsize": 135168, 00:32:23.552 "enable_numa": false 00:32:23.552 } 00:32:23.552 } 00:32:23.552 ] 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "subsystem": "sock", 00:32:23.552 "config": [ 00:32:23.552 { 00:32:23.552 "method": "sock_set_default_impl", 00:32:23.552 "params": { 00:32:23.552 "impl_name": "posix" 00:32:23.552 } 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "method": "sock_impl_set_options", 00:32:23.552 "params": { 00:32:23.552 "impl_name": "ssl", 00:32:23.552 "recv_buf_size": 4096, 00:32:23.552 "send_buf_size": 4096, 00:32:23.552 "enable_recv_pipe": true, 00:32:23.552 "enable_quickack": false, 00:32:23.552 "enable_placement_id": 0, 00:32:23.552 "enable_zerocopy_send_server": true, 00:32:23.552 "enable_zerocopy_send_client": false, 00:32:23.552 "zerocopy_threshold": 0, 00:32:23.552 "tls_version": 0, 00:32:23.552 "enable_ktls": false 00:32:23.552 } 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "method": "sock_impl_set_options", 00:32:23.552 "params": { 00:32:23.552 "impl_name": "posix", 00:32:23.552 "recv_buf_size": 2097152, 00:32:23.552 "send_buf_size": 2097152, 00:32:23.552 "enable_recv_pipe": true, 00:32:23.552 "enable_quickack": false, 00:32:23.552 "enable_placement_id": 0, 00:32:23.552 "enable_zerocopy_send_server": true, 
00:32:23.552 "enable_zerocopy_send_client": false, 00:32:23.552 "zerocopy_threshold": 0, 00:32:23.552 "tls_version": 0, 00:32:23.552 "enable_ktls": false 00:32:23.552 } 00:32:23.552 } 00:32:23.552 ] 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "subsystem": "vmd", 00:32:23.552 "config": [] 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "subsystem": "accel", 00:32:23.552 "config": [ 00:32:23.552 { 00:32:23.552 "method": "accel_set_options", 00:32:23.552 "params": { 00:32:23.552 "small_cache_size": 128, 00:32:23.552 "large_cache_size": 16, 00:32:23.552 "task_count": 2048, 00:32:23.552 "sequence_count": 2048, 00:32:23.552 "buf_count": 2048 00:32:23.552 } 00:32:23.552 } 00:32:23.552 ] 00:32:23.552 }, 00:32:23.552 { 00:32:23.552 "subsystem": "bdev", 00:32:23.552 "config": [ 00:32:23.552 { 00:32:23.552 "method": "bdev_set_options", 00:32:23.552 "params": { 00:32:23.552 "bdev_io_pool_size": 65535, 00:32:23.552 "bdev_io_cache_size": 256, 00:32:23.553 "bdev_auto_examine": true, 00:32:23.553 "iobuf_small_cache_size": 128, 00:32:23.553 "iobuf_large_cache_size": 16 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_raid_set_options", 00:32:23.553 "params": { 00:32:23.553 "process_window_size_kb": 1024, 00:32:23.553 "process_max_bandwidth_mb_sec": 0 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_iscsi_set_options", 00:32:23.553 "params": { 00:32:23.553 "timeout_sec": 30 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_nvme_set_options", 00:32:23.553 "params": { 00:32:23.553 "action_on_timeout": "none", 00:32:23.553 "timeout_us": 0, 00:32:23.553 "timeout_admin_us": 0, 00:32:23.553 "keep_alive_timeout_ms": 10000, 00:32:23.553 "arbitration_burst": 0, 00:32:23.553 "low_priority_weight": 0, 00:32:23.553 "medium_priority_weight": 0, 00:32:23.553 "high_priority_weight": 0, 00:32:23.553 "nvme_adminq_poll_period_us": 10000, 00:32:23.553 "nvme_ioq_poll_period_us": 0, 00:32:23.553 "io_queue_requests": 512, 
00:32:23.553 "delay_cmd_submit": true, 00:32:23.553 "transport_retry_count": 4, 00:32:23.553 "bdev_retry_count": 3, 00:32:23.553 "transport_ack_timeout": 0, 00:32:23.553 "ctrlr_loss_timeout_sec": 0, 00:32:23.553 "reconnect_delay_sec": 0, 00:32:23.553 "fast_io_fail_timeout_sec": 0, 00:32:23.553 "disable_auto_failback": false, 00:32:23.553 "generate_uuids": false, 00:32:23.553 "transport_tos": 0, 00:32:23.553 "nvme_error_stat": false, 00:32:23.553 "rdma_srq_size": 0, 00:32:23.553 "io_path_stat": false, 00:32:23.553 "allow_accel_sequence": false, 00:32:23.553 "rdma_max_cq_size": 0, 00:32:23.553 "rdma_cm_event_timeout_ms": 0, 00:32:23.553 "dhchap_digests": [ 00:32:23.553 "sha256", 00:32:23.553 "sha384", 00:32:23.553 "sha512" 00:32:23.553 ], 00:32:23.553 "dhchap_dhgroups": [ 00:32:23.553 "null", 00:32:23.553 "ffdhe2048", 00:32:23.553 "ffdhe3072", 00:32:23.553 "ffdhe4096", 00:32:23.553 "ffdhe6144", 00:32:23.553 "ffdhe8192" 00:32:23.553 ] 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_nvme_attach_controller", 00:32:23.553 "params": { 00:32:23.553 "name": "nvme0", 00:32:23.553 "trtype": "TCP", 00:32:23.553 "adrfam": "IPv4", 00:32:23.553 "traddr": "127.0.0.1", 00:32:23.553 "trsvcid": "4420", 00:32:23.553 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.553 "prchk_reftag": false, 00:32:23.553 "prchk_guard": false, 00:32:23.553 "ctrlr_loss_timeout_sec": 0, 00:32:23.553 "reconnect_delay_sec": 0, 00:32:23.553 "fast_io_fail_timeout_sec": 0, 00:32:23.553 "psk": "key0", 00:32:23.553 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.553 "hdgst": false, 00:32:23.553 "ddgst": false, 00:32:23.553 "multipath": "multipath" 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_nvme_set_hotplug", 00:32:23.553 "params": { 00:32:23.553 "period_us": 100000, 00:32:23.553 "enable": false 00:32:23.553 } 00:32:23.553 }, 00:32:23.553 { 00:32:23.553 "method": "bdev_wait_for_examine" 00:32:23.553 } 00:32:23.553 ] 00:32:23.553 }, 00:32:23.553 { 
00:32:23.553 "subsystem": "nbd", 00:32:23.553 "config": [] 00:32:23.553 } 00:32:23.553 ] 00:32:23.553 }' 00:32:23.553 19:38:57 keyring_file -- keyring/file.sh@115 -- # killprocess 4053925 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4053925 ']' 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4053925 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053925 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053925' 00:32:23.553 killing process with pid 4053925 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@973 -- # kill 4053925 00:32:23.553 Received shutdown signal, test time was about 1.000000 seconds 00:32:23.553 00:32:23.553 Latency(us) 00:32:23.553 [2024-11-26T18:38:57.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:23.553 [2024-11-26T18:38:57.418Z] =================================================================================================================== 00:32:23.553 [2024-11-26T18:38:57.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:23.553 19:38:57 keyring_file -- common/autotest_common.sh@978 -- # wait 4053925 00:32:23.813 19:38:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=4055732 00:32:23.813 19:38:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 4055732 /var/tmp/bperf.sock 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 4055732 ']' 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:23.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:23.813 19:38:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:23.813 19:38:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:23.813 19:38:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:32:23.813 "subsystems": [ 00:32:23.813 { 00:32:23.813 "subsystem": "keyring", 00:32:23.813 "config": [ 00:32:23.813 { 00:32:23.813 "method": "keyring_file_add_key", 00:32:23.813 "params": { 00:32:23.813 "name": "key0", 00:32:23.813 "path": "/tmp/tmp.v25m3jJ3BE" 00:32:23.813 } 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "method": "keyring_file_add_key", 00:32:23.813 "params": { 00:32:23.813 "name": "key1", 00:32:23.813 "path": "/tmp/tmp.Nnld9lbn0t" 00:32:23.813 } 00:32:23.813 } 00:32:23.813 ] 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "subsystem": "iobuf", 00:32:23.813 "config": [ 00:32:23.813 { 00:32:23.813 "method": "iobuf_set_options", 00:32:23.813 "params": { 00:32:23.813 "small_pool_count": 8192, 00:32:23.813 "large_pool_count": 1024, 00:32:23.813 "small_bufsize": 8192, 00:32:23.813 "large_bufsize": 135168, 00:32:23.813 "enable_numa": false 00:32:23.813 } 00:32:23.813 } 00:32:23.813 ] 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "subsystem": "sock", 00:32:23.813 "config": [ 00:32:23.813 { 00:32:23.813 "method": "sock_set_default_impl", 00:32:23.813 "params": { 00:32:23.813 "impl_name": "posix" 00:32:23.813 } 00:32:23.813 }, 
00:32:23.813 { 00:32:23.813 "method": "sock_impl_set_options", 00:32:23.813 "params": { 00:32:23.813 "impl_name": "ssl", 00:32:23.813 "recv_buf_size": 4096, 00:32:23.813 "send_buf_size": 4096, 00:32:23.813 "enable_recv_pipe": true, 00:32:23.813 "enable_quickack": false, 00:32:23.813 "enable_placement_id": 0, 00:32:23.813 "enable_zerocopy_send_server": true, 00:32:23.813 "enable_zerocopy_send_client": false, 00:32:23.813 "zerocopy_threshold": 0, 00:32:23.813 "tls_version": 0, 00:32:23.813 "enable_ktls": false 00:32:23.813 } 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "method": "sock_impl_set_options", 00:32:23.813 "params": { 00:32:23.813 "impl_name": "posix", 00:32:23.813 "recv_buf_size": 2097152, 00:32:23.813 "send_buf_size": 2097152, 00:32:23.813 "enable_recv_pipe": true, 00:32:23.813 "enable_quickack": false, 00:32:23.813 "enable_placement_id": 0, 00:32:23.813 "enable_zerocopy_send_server": true, 00:32:23.813 "enable_zerocopy_send_client": false, 00:32:23.813 "zerocopy_threshold": 0, 00:32:23.813 "tls_version": 0, 00:32:23.813 "enable_ktls": false 00:32:23.813 } 00:32:23.813 } 00:32:23.813 ] 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "subsystem": "vmd", 00:32:23.813 "config": [] 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "subsystem": "accel", 00:32:23.813 "config": [ 00:32:23.813 { 00:32:23.813 "method": "accel_set_options", 00:32:23.813 "params": { 00:32:23.813 "small_cache_size": 128, 00:32:23.813 "large_cache_size": 16, 00:32:23.813 "task_count": 2048, 00:32:23.813 "sequence_count": 2048, 00:32:23.813 "buf_count": 2048 00:32:23.813 } 00:32:23.813 } 00:32:23.813 ] 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "subsystem": "bdev", 00:32:23.813 "config": [ 00:32:23.813 { 00:32:23.813 "method": "bdev_set_options", 00:32:23.813 "params": { 00:32:23.813 "bdev_io_pool_size": 65535, 00:32:23.813 "bdev_io_cache_size": 256, 00:32:23.813 "bdev_auto_examine": true, 00:32:23.813 "iobuf_small_cache_size": 128, 00:32:23.813 "iobuf_large_cache_size": 16 00:32:23.813 } 
00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "method": "bdev_raid_set_options", 00:32:23.813 "params": { 00:32:23.813 "process_window_size_kb": 1024, 00:32:23.813 "process_max_bandwidth_mb_sec": 0 00:32:23.813 } 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "method": "bdev_iscsi_set_options", 00:32:23.813 "params": { 00:32:23.813 "timeout_sec": 30 00:32:23.813 } 00:32:23.813 }, 00:32:23.813 { 00:32:23.813 "method": "bdev_nvme_set_options", 00:32:23.813 "params": { 00:32:23.813 "action_on_timeout": "none", 00:32:23.813 "timeout_us": 0, 00:32:23.813 "timeout_admin_us": 0, 00:32:23.813 "keep_alive_timeout_ms": 10000, 00:32:23.813 "arbitration_burst": 0, 00:32:23.813 "low_priority_weight": 0, 00:32:23.813 "medium_priority_weight": 0, 00:32:23.813 "high_priority_weight": 0, 00:32:23.813 "nvme_adminq_poll_period_us": 10000, 00:32:23.813 "nvme_ioq_poll_period_us": 0, 00:32:23.814 "io_queue_requests": 512, 00:32:23.814 "delay_cmd_submit": true, 00:32:23.814 "transport_retry_count": 4, 00:32:23.814 "bdev_retry_count": 3, 00:32:23.814 "transport_ack_timeout": 0, 00:32:23.814 "ctrlr_loss_timeout_sec": 0, 00:32:23.814 "reconnect_delay_sec": 0, 00:32:23.814 "fast_io_fail_timeout_sec": 0, 00:32:23.814 "disable_auto_failback": false, 00:32:23.814 "generate_uuids": false, 00:32:23.814 "transport_tos": 0, 00:32:23.814 "nvme_error_stat": false, 00:32:23.814 "rdma_srq_size": 0, 00:32:23.814 "io_path_stat": false, 00:32:23.814 "allow_accel_sequence": false, 00:32:23.814 "rdma_max_cq_size": 0, 00:32:23.814 "rdma_cm_event_timeout_ms": 0, 00:32:23.814 "dhchap_digests": [ 00:32:23.814 "sha256", 00:32:23.814 "sha384", 00:32:23.814 "sha512" 00:32:23.814 ], 00:32:23.814 "dhchap_dhgroups": [ 00:32:23.814 "null", 00:32:23.814 "ffdhe2048", 00:32:23.814 "ffdhe3072", 00:32:23.814 "ffdhe4096", 00:32:23.814 "ffdhe6144", 00:32:23.814 "ffdhe8192" 00:32:23.814 ] 00:32:23.814 } 00:32:23.814 }, 00:32:23.814 { 00:32:23.814 "method": "bdev_nvme_attach_controller", 00:32:23.814 "params": { 00:32:23.814 
"name": "nvme0", 00:32:23.814 "trtype": "TCP", 00:32:23.814 "adrfam": "IPv4", 00:32:23.814 "traddr": "127.0.0.1", 00:32:23.814 "trsvcid": "4420", 00:32:23.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:23.814 "prchk_reftag": false, 00:32:23.814 "prchk_guard": false, 00:32:23.814 "ctrlr_loss_timeout_sec": 0, 00:32:23.814 "reconnect_delay_sec": 0, 00:32:23.814 "fast_io_fail_timeout_sec": 0, 00:32:23.814 "psk": "key0", 00:32:23.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:23.814 "hdgst": false, 00:32:23.814 "ddgst": false, 00:32:23.814 "multipath": "multipath" 00:32:23.814 } 00:32:23.814 }, 00:32:23.814 { 00:32:23.814 "method": "bdev_nvme_set_hotplug", 00:32:23.814 "params": { 00:32:23.814 "period_us": 100000, 00:32:23.814 "enable": false 00:32:23.814 } 00:32:23.814 }, 00:32:23.814 { 00:32:23.814 "method": "bdev_wait_for_examine" 00:32:23.814 } 00:32:23.814 ] 00:32:23.814 }, 00:32:23.814 { 00:32:23.814 "subsystem": "nbd", 00:32:23.814 "config": [] 00:32:23.814 } 00:32:23.814 ] 00:32:23.814 }' 00:32:23.814 [2024-11-26 19:38:57.482588] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:32:23.814 [2024-11-26 19:38:57.482633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4055732 ] 00:32:23.814 [2024-11-26 19:38:57.538050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.814 [2024-11-26 19:38:57.567402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:24.073 [2024-11-26 19:38:57.711628] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:24.638 19:38:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:24.638 19:38:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:32:24.638 19:38:58 keyring_file -- keyring/file.sh@121 -- # jq length 00:32:24.638 19:38:58 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.638 19:38:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:24.638 19:38:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:24.638 19:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.896 19:38:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:32:24.896 19:38:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:32:24.896 19:38:58 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:24.896 19:38:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:24.896 19:38:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:24.896 19:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:24.896 19:38:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:24.896 19:38:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:32:24.896 19:38:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:32:24.896 19:38:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:24.896 19:38:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:32:25.155 19:38:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:32:25.155 19:38:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:25.155 19:38:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.v25m3jJ3BE /tmp/tmp.Nnld9lbn0t 00:32:25.155 19:38:58 keyring_file -- keyring/file.sh@20 -- # killprocess 4055732 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4055732 ']' 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4055732 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4055732 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 4055732'
00:32:25.155 killing process with pid 4055732
00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@973 -- # kill 4055732
00:32:25.155 Received shutdown signal, test time was about 1.000000 seconds
00:32:25.155
00:32:25.155 Latency(us)
00:32:25.155 [2024-11-26T18:38:59.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:25.155 [2024-11-26T18:38:59.020Z] ===================================================================================================================
00:32:25.155 [2024-11-26T18:38:59.020Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:32:25.155 19:38:58 keyring_file -- common/autotest_common.sh@978 -- # wait 4055732
00:32:25.414 19:38:59 keyring_file -- keyring/file.sh@21 -- # killprocess 4053588
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 4053588 ']'
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@958 -- # kill -0 4053588
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@959 -- # uname
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4053588
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4053588'
00:32:25.414 killing process with pid 4053588
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@973 -- # kill 4053588
00:32:25.414 19:38:59 keyring_file -- common/autotest_common.sh@978 -- # wait 4053588
00:32:25.672
00:32:25.672 real 0m10.917s
00:32:25.672 user 0m26.063s
00:32:25.672 sys 0m2.198s
00:32:25.672 19:38:59 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:25.672 19:38:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:25.672 ************************************ 00:32:25.672 END TEST keyring_file 00:32:25.672 ************************************ 00:32:25.672 19:38:59 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:32:25.673 19:38:59 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:25.673 19:38:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:25.673 19:38:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.673 19:38:59 -- common/autotest_common.sh@10 -- # set +x 00:32:25.673 ************************************ 00:32:25.673 START TEST keyring_linux 00:32:25.673 ************************************ 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:25.673 Joined session keyring: 277753275 00:32:25.673 * Looking for test storage... 
00:32:25.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.673 --rc genhtml_branch_coverage=1 00:32:25.673 --rc genhtml_function_coverage=1 00:32:25.673 --rc genhtml_legend=1 00:32:25.673 --rc geninfo_all_blocks=1 00:32:25.673 --rc geninfo_unexecuted_blocks=1 00:32:25.673 00:32:25.673 ' 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.673 --rc genhtml_branch_coverage=1 00:32:25.673 --rc genhtml_function_coverage=1 00:32:25.673 --rc genhtml_legend=1 00:32:25.673 --rc geninfo_all_blocks=1 00:32:25.673 --rc geninfo_unexecuted_blocks=1 00:32:25.673 00:32:25.673 ' 
00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.673 --rc genhtml_branch_coverage=1 00:32:25.673 --rc genhtml_function_coverage=1 00:32:25.673 --rc genhtml_legend=1 00:32:25.673 --rc geninfo_all_blocks=1 00:32:25.673 --rc geninfo_unexecuted_blocks=1 00:32:25.673 00:32:25.673 ' 00:32:25.673 19:38:59 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.673 --rc genhtml_branch_coverage=1 00:32:25.673 --rc genhtml_function_coverage=1 00:32:25.673 --rc genhtml_legend=1 00:32:25.673 --rc geninfo_all_blocks=1 00:32:25.673 --rc geninfo_unexecuted_blocks=1 00:32:25.673 00:32:25.673 ' 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
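The `cmp_versions` trace above (scripts/common.sh) splits two version strings on `.`, `:`, and `-` and compares them field by field to decide whether the installed lcov predates 2.0. A rough standalone Python equivalent of that comparison, for illustration only (the function name `version_lt` is mine, not SPDK's):

```python
import re

def version_lt(v1: str, v2: str) -> bool:
    # Mirror of the traced cmp_versions logic: split on .-: into numeric
    # fields, compare left to right; a missing field counts as 0, just as
    # an unset bash array element evaluates to 0 in arithmetic context.
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return x < y
    return False

print(version_lt("1.15", "2"))  # the check traced above; prints True
```

Because lcov 1.15 compares less than 2, the script selects the branch-coverage `--rc` options seen in the subsequent `LCOV_OPTS` exports.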
00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=801c19ac-fce9-ec11-9bc7-a4bf019282bb 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.673 19:38:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.673 19:38:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.673 19:38:59 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.673 19:38:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.673 19:38:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:25.673 19:38:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:32:25.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:25.673 19:38:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:25.673 19:38:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:25.673 19:38:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:25.674 /tmp/:spdk-test:key0 00:32:25.674 19:38:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:32:25.674 19:38:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:25.674 19:38:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:25.674 /tmp/:spdk-test:key1 00:32:25.674 19:38:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=4056169 00:32:25.674 19:38:59 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:25.674 19:38:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 4056169 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 4056169 ']' 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.674 19:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.932 [2024-11-26 19:38:59.542905] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
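The `prep_key` steps traced earlier call `format_interchange_psk`, which wraps a configured hex key in the NVMe/TCP TLS PSK interchange format: a `NVMeTLSkey-1` prefix, a two-digit hash indicator, then base64 of the key bytes with a CRC32 appended. A minimal Python sketch that reproduces the `NVMeTLSkey-1:00:...` strings appearing in this log; the little-endian CRC byte order is my reading of SPDK's helper, not something stated in the trace:

```python
import base64
import zlib

def format_interchange_psk(key: str, hash_id: int = 0) -> str:
    # Build "NVMeTLSkey-1:<hash>:<base64(key bytes || CRC32)>:" -- the PSK
    # interchange string the keyring tests later store via `keyctl add`.
    data = key.encode()
    crc = zlib.crc32(data).to_bytes(4, "little")  # assumption: little-endian
    b64 = base64.b64encode(data + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, b64)

# key0 from the trace above (00112233445566778899aabbccddeeff, digest 0)
print(format_interchange_psk("00112233445566778899aabbccddeeff"))
```

The resulting string is what `keyctl print` is expected to echo back verbatim later in the test.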
00:32:25.932 [2024-11-26 19:38:59.542949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056169 ] 00:32:25.932 [2024-11-26 19:38:59.599014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:25.932 [2024-11-26 19:38:59.629018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:25.932 19:38:59 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:25.932 19:38:59 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:25.932 19:38:59 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:25.932 19:38:59 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.932 19:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:25.932 [2024-11-26 19:38:59.793108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:26.191 null0 00:32:26.191 [2024-11-26 19:38:59.825163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:26.192 [2024-11-26 19:38:59.825511] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.192 19:38:59 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:26.192 38067338 00:32:26.192 19:38:59 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:26.192 213448149 00:32:26.192 19:38:59 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=4056271 00:32:26.192 19:38:59 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 4056271 /var/tmp/bperf.sock 00:32:26.192 19:38:59 keyring_linux -- 
common/autotest_common.sh@835 -- # '[' -z 4056271 ']' 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:26.192 19:38:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:26.192 19:38:59 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:26.192 [2024-11-26 19:38:59.883659] Starting SPDK v25.01-pre git sha1 c6092c872 / DPDK 24.03.0 initialization... 
00:32:26.192 [2024-11-26 19:38:59.883706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4056271 ] 00:32:26.192 [2024-11-26 19:38:59.946923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.192 [2024-11-26 19:38:59.976769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:26.192 19:39:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:26.192 19:39:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:32:26.192 19:39:00 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:26.192 19:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:26.450 19:39:00 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:26.450 19:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:26.715 19:39:00 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.715 19:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:26.715 [2024-11-26 19:39:00.509026] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:26.977 nvme0n1 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:26.977 19:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:26.977 19:39:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:26.977 19:39:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:26.977 19:39:00 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:26.977 19:39:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@25 -- # sn=38067338 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@26 -- # [[ 38067338 == \3\8\0\6\7\3\3\8 ]] 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 38067338 00:32:27.235 19:39:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:27.235 19:39:00 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:27.235 Running I/O for 1 seconds...
00:32:28.170 24118.00 IOPS, 94.21 MiB/s
00:32:28.170 Latency(us)
00:32:28.170 [2024-11-26T18:39:02.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:28.170 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:28.171 nvme0n1 : 1.01 24118.29 94.21 0.00 0.00 5291.46 4369.07 10758.83
00:32:28.171 [2024-11-26T18:39:02.036Z] ===================================================================================================================
00:32:28.171 [2024-11-26T18:39:02.036Z] Total : 24118.29 94.21 0.00 0.00 5291.46 4369.07 10758.83
00:32:28.171 {
00:32:28.171 "results": [
00:32:28.171 {
00:32:28.171 "job": "nvme0n1",
00:32:28.171 "core_mask": "0x2",
00:32:28.171 "workload": "randread",
00:32:28.171 "status": "finished",
00:32:28.171 "queue_depth": 128,
00:32:28.171 "io_size": 4096,
00:32:28.171 "runtime": 1.005295,
00:32:28.171 "iops": 24118.293635201608,
00:32:28.171 "mibps": 94.21208451250628,
00:32:28.171 "io_failed": 0,
00:32:28.171 "io_timeout": 0,
00:32:28.171 "avg_latency_us": 5291.463477137122,
00:32:28.171 "min_latency_us": 4369.066666666667,
00:32:28.171 "max_latency_us": 10758.826666666666
00:32:28.171 }
00:32:28.171 ],
00:32:28.171 "core_count": 1
00:32:28.171 }
00:32:28.171 19:39:02 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:32:28.171 19:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:32:28.429 19:39:02 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:32:28.429 19:39:02 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:32:28.429 19:39:02 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:32:28.429 19:39:02
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:28.429 19:39:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:28.429 19:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:28.687 19:39:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:28.687 19:39:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.688 19:39:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:28.688 [2024-11-26 19:39:02.493744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:28.688 [2024-11-26 19:39:02.494525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a7a0 (107): Transport endpoint is not connected 00:32:28.688 [2024-11-26 19:39:02.495520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62a7a0 (9): Bad file descriptor 00:32:28.688 [2024-11-26 19:39:02.496523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:32:28.688 [2024-11-26 19:39:02.496530] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:28.688 [2024-11-26 19:39:02.496536] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:32:28.688 [2024-11-26 19:39:02.496542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:32:28.688 request:
00:32:28.688 {
00:32:28.688 "name": "nvme0",
00:32:28.688 "trtype": "tcp",
00:32:28.688 "traddr": "127.0.0.1",
00:32:28.688 "adrfam": "ipv4",
00:32:28.688 "trsvcid": "4420",
00:32:28.688 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:28.688 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:28.688 "prchk_reftag": false,
00:32:28.688 "prchk_guard": false,
00:32:28.688 "hdgst": false,
00:32:28.688 "ddgst": false,
00:32:28.688 "psk": ":spdk-test:key1",
00:32:28.688 "allow_unrecognized_csi": false,
00:32:28.688 "method": "bdev_nvme_attach_controller",
00:32:28.688 "req_id": 1
00:32:28.688 }
00:32:28.688 Got JSON-RPC error response
00:32:28.688 response:
00:32:28.688 {
00:32:28.688 "code": -5,
00:32:28.688 "message": "Input/output error"
00:32:28.688 }
00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@33 -- # sn=38067338
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 38067338
00:32:28.688 1 links removed
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
19:39:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@33 -- # sn=213448149 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 213448149 00:32:28.688 1 links removed 00:32:28.688 19:39:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 4056271 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4056271 ']' 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4056271 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.688 19:39:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056271 00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056271' 00:32:28.947 killing process with pid 4056271 00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 4056271 00:32:28.947 Received shutdown signal, test time was about 1.000000 seconds 00:32:28.947 00:32:28.947 Latency(us) 00:32:28.947 [2024-11-26T18:39:02.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.947 [2024-11-26T18:39:02.812Z] =================================================================================================================== 00:32:28.947 [2024-11-26T18:39:02.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 4056271 
00:32:28.947 19:39:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 4056169
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 4056169 ']'
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 4056169
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 4056169
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 4056169'
00:32:28.947 killing process with pid 4056169
19:39:02 keyring_linux -- common/autotest_common.sh@973 -- # kill 4056169
00:32:28.947 19:39:02 keyring_linux -- common/autotest_common.sh@978 -- # wait 4056169
00:32:29.206
00:32:29.206 real 0m3.571s
00:32:29.207 user 0m6.784s
00:32:29.207 sys 0m1.175s
19:39:02 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:29.207 19:39:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:32:29.207 ************************************
00:32:29.207 END TEST keyring_linux
00:32:29.207 ************************************
00:32:29.207 19:39:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:32:29.207 19:39:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:32:29.207 19:39:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:32:29.207 19:39:02 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:32:29.207 19:39:02 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:32:29.207 19:39:02 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:32:29.207 19:39:02 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:32:29.207 19:39:02 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:29.207 19:39:02 -- common/autotest_common.sh@10 -- # set +x
00:32:29.207 19:39:02 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:32:29.207 19:39:02 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:32:29.207 19:39:02 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:32:29.207 19:39:02 -- common/autotest_common.sh@10 -- # set +x
00:32:34.477 INFO: APP EXITING
00:32:34.477 INFO: killing all VMs
00:32:34.477 INFO: killing vhost app
00:32:34.477 INFO: EXIT DONE
00:32:37.008 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:65:00.0 (144d a80a): Already using the nvme driver
00:32:37.008 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:32:37.008 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:32:39.543 Cleaning
00:32:39.543 Removing: /var/run/dpdk/spdk0/config
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:32:39.543 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:39.543 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:39.543 Removing: /var/run/dpdk/spdk1/config
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:39.543 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:39.543 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:39.543 Removing: /var/run/dpdk/spdk2/config
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:39.543 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:39.543 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:39.543 Removing: /var/run/dpdk/spdk3/config
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:39.543 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:39.543 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:39.544 Removing: /var/run/dpdk/spdk4/config
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:39.544 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:39.544 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:39.544 Removing: /dev/shm/bdev_svc_trace.1
00:32:39.544 Removing: /dev/shm/nvmf_trace.0
00:32:39.544 Removing: /dev/shm/spdk_tgt_trace.pid3455264
00:32:39.544 Removing: /var/run/dpdk/spdk0
00:32:39.544 Removing: /var/run/dpdk/spdk1
00:32:39.544 Removing: /var/run/dpdk/spdk2
00:32:39.544 Removing: /var/run/dpdk/spdk3
00:32:39.544 Removing: /var/run/dpdk/spdk4
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3453393
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3455264
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3456040
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3457503
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3457649
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3458991
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3459034
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3459199
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3460307
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3461087
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3461480
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3461557
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3461964
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3462355
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3462640
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3462765
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3463132
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3463849
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3467422
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3467672
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3467814
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3467824
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3468280
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3468491
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3468891
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3468894
00:32:39.544 Removing: /var/run/dpdk/spdk_pid3469254
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3469266
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3469529
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3469630
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3470076
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3470429
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3470832
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3475353
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3480739
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3493683
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3494499
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3499898
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3500257
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3505650
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3512967
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3517062
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3530009
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3541416
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3543763
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3545094
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3566798
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3571889
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3632812
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3639540
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3647148
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3655793
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3655813
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3656822
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3658068
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3659153
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3659823
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3659959
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3660231
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3660489
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3660494
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3661553
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3662821
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3663877
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3664724
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3664822
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3665160
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3666265
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3667390
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3677982
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3713420
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3718904
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3721378
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3723874
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3724124
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3724143
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3724474
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3724856
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3727196
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3728258
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3728644
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3731347
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3731934
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3732587
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3737808
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3744955
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3744956
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3744957
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3750302
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3761023
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3766474
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3774352
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3775840
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3777678
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3779369
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3785232
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3790696
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3795948
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3805369
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3805515
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3810656
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3810920
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3811255
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3812017
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3812028
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3818207
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3819030
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3824530
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3828043
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3834578
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3841793
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3852361
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3861474
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3861503
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3885291
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3886287
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3886965
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3887643
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3888371
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3889050
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3889725
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3890403
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3895472
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3895817
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3903542
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3903864
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3910851
00:32:39.803 Removing: /var/run/dpdk/spdk_pid3916125
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3928655
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3929334
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3934750
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3935255
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3940984
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3948031
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3951420
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3964227
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3975582
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3977695
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3978914
00:32:40.063 Removing: /var/run/dpdk/spdk_pid3999215
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4004556
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4008273
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4015749
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4015897
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4022031
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4024562
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4027194
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4028612
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4031419
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4032940
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4043273
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4043936
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4044687
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4047542
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4048196
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4048841
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4053588
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4053925
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4055732
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4056169
00:32:40.063 Removing: /var/run/dpdk/spdk_pid4056271
00:32:40.063 Clean
00:32:40.063 19:39:13 -- common/autotest_common.sh@1453 -- # return 0
00:32:40.063 19:39:13 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:40.063 19:39:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:40.063 19:39:13 -- common/autotest_common.sh@10 -- # set +x
00:32:40.063 19:39:13 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:40.063 19:39:13 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:40.063 19:39:13 -- common/autotest_common.sh@10 -- # set +x
00:32:40.063 19:39:13 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:40.063 19:39:13 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:40.063 19:39:13 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:40.063 19:39:13 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:40.063 19:39:13 -- spdk/autotest.sh@398 -- # hostname
00:32:40.063 19:39:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:40.324 geninfo: WARNING: invalid characters removed from testname!
00:32:58.422 19:39:31 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:00.331 19:39:34 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:02.237 19:39:35 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:03.617 19:39:37 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:05.523 19:39:39 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:06.903 19:39:40 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:08.811 19:39:42 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:08.811 19:39:42 -- spdk/autorun.sh@1 -- $ timing_finish
00:33:08.811 19:39:42 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:33:08.811 19:39:42 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:08.811 19:39:42 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:33:08.811 19:39:42 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:08.811 + [[ -n 3372877 ]]
00:33:08.811 + sudo kill 3372877
00:33:08.822 [Pipeline] }
00:33:08.840 [Pipeline] // stage
00:33:08.844 [Pipeline] }
00:33:08.858 [Pipeline] // timeout
00:33:08.863 [Pipeline] }
00:33:08.878 [Pipeline] // catchError
00:33:08.883 [Pipeline] }
00:33:08.898 [Pipeline] // wrap
00:33:08.905 [Pipeline] }
00:33:08.918 [Pipeline] // catchError
00:33:08.927 [Pipeline] stage
00:33:08.930 [Pipeline] { (Epilogue)
00:33:08.943 [Pipeline] catchError
00:33:08.945 [Pipeline] {
00:33:08.957 [Pipeline] echo
00:33:08.959 Cleanup processes
00:33:08.965 [Pipeline] sh
00:33:09.251 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:09.251 4068836 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:09.265 [Pipeline] sh
00:33:09.550 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:09.550 ++ grep -v 'sudo pgrep'
00:33:09.550 ++ awk '{print $1}'
00:33:09.550 + sudo kill -9
00:33:09.550 + true
00:33:09.563 [Pipeline] sh
00:33:09.848 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:19.955 [Pipeline] sh
00:33:20.240 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:20.240 Artifacts sizes are good
00:33:20.255 [Pipeline] archiveArtifacts
00:33:20.263 Archiving artifacts
00:33:20.409 [Pipeline] sh
00:33:20.694 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:20.708 [Pipeline] cleanWs
00:33:20.718 [WS-CLEANUP] Deleting project workspace...
00:33:20.718 [WS-CLEANUP] Deferred wipeout is used...
00:33:20.724 [WS-CLEANUP] done
00:33:20.726 [Pipeline] }
00:33:20.746 [Pipeline] // catchError
00:33:20.759 [Pipeline] sh
00:33:21.041 + logger -p user.info -t JENKINS-CI
00:33:21.050 [Pipeline] }
00:33:21.065 [Pipeline] // stage
00:33:21.070 [Pipeline] }
00:33:21.086 [Pipeline] // node
00:33:21.092 [Pipeline] End of Pipeline
00:33:21.132 Finished: SUCCESS